[FRIAM] Epistemic Holography
Pieter Steenekamp
pieters at randcontrols.co.za
Tue May 20 02:19:03 EDT 2025
I agree with the article's view that LLMs act more like holograms than
mirrors, helping us clarify what we already understand about a topic. While
LLMs are powerful and can generate novel insights, as Google DeepMind's work
suggests, they still fall short of replicating key aspects of human
intelligence. Current AI frameworks, though advancing through greater
computing power and refined methods, remain limited by their reliance on
human-established foundations. They excel at interpreting, completing, and
creating within those boundaries, but they lack true independent
intelligence.
Predicting the future of AI is speculative, as there's no solid basis for
extrapolation. Still, I believe a breakthrough in AI frameworks will
eventually produce a form of native intelligence. Until then, humans remain
essential, and human-AI synergy will drive remarkable achievements. Neither
humans nor AI alone will dominate; together they will unlock extraordinary
potential.
On Tue, 20 May 2025 at 03:12, steve smith <sasmyth at swcp.com> wrote:
>
> https://www.psychologytoday.com/us/blog/the-digital-self/202505/llms-arent-mirrors-theyre-holograms
>
> I know a bit about holography and holograms, and I've been known to use
> optical metaphors for information analysis (semantic lensing and
> ontological faceting), but I don't know how I feel about this
> characterization of LLMs.
>
> Holograms Don’t Store Images, They Store Possibility
>
> A hologram <https://science.howstuffworks.com/hologram.htm> doesn’t
> capture a picture. It encodes an interference pattern. Or more simply, it
> creates a map of how light interacts with an object. When illuminated
> properly, it reconstructs a three-dimensional image that appears real from
> multiple angles. Here’s the truly fascinating part: If you break that
> hologram into pieces, each fragment still contains the whole image, just at
> a lower resolution. The detail is degraded, but the structural integrity
> remains.
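
To make the fragment property concrete: a hologram is roughly a recording of
an object's wavefront, and a crude stand-in for that is the 2-D Fourier
transform of an image, in which every sample mixes information from every
pixel. A toy numpy sketch (the FFT-as-hologram equivalence is only loose, and
the test image and the 10% fraction are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0            # a simple square "object"

# "Record the hologram": every Fourier sample blends all pixels at once.
field = np.fft.fft2(image)

# "Break it into pieces": keep only a random 10% fragment of the recording.
mask = rng.random(field.shape) < 0.10
recon = np.fft.ifft2(field * mask).real

# The whole square is still recognizably there, just at lower fidelity:
print("correlation with original:",
      np.corrcoef(image.ravel(), recon.ravel())[0, 1])

Because every retained sample carries global information, discarding 90% of
them degrades resolution everywhere rather than cropping the image.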
>
> LLMs function in a curiously similar way. They don’t store knowledge as
> discrete facts or memories. Instead, they encode relationships—statistical
> patterns between words, contexts, and meanings—across a high-dimensional
> vector space. When prompted, they don’t retrieve information. They
> reconstruct it, generating language that aligns with the expected shape of
> an answer. Even from vague or incomplete input, they produce responses that
> feel coherent and often surprisingly complete. The completeness isn’t the
> result of understanding. It’s the result of well-tuned reconstruction.
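
The "reconstruct, don't retrieve" point can be sketched in a few lines as
well. Below, a tiny hand-made co-occurrence table (invented counts, not from
any corpus) is compressed into a rank-2 factorization; asking for any
individual count afterwards regenerates it from shared structure rather than
looking it up:

import numpy as np

words = ["king", "queen", "man", "woman", "apple", "pear"]
cooc = np.array([
    [0, 8, 6, 2, 0, 0],
    [8, 0, 2, 6, 0, 0],
    [6, 2, 0, 4, 1, 1],
    [2, 6, 4, 0, 1, 1],
    [0, 0, 1, 1, 0, 7],
    [0, 0, 1, 1, 7, 0],
], dtype=float)

# Compress: the "knowledge" now lives in a few dense vectors, each of
# which blends information about every word at once.
U, S, Vt = np.linalg.svd(cooc)
emb = U[:, :2] * S[:2]               # rank-2 embeddings, one row per word

# No individual count is stored any more; a query reconstructs it from
# the shared structure, approximately right but never looked up.
reconstructed = emb @ Vt[:2]
print(np.round(reconstructed, 1))    # compare against the original cooc

The reconstruction is plausible everywhere and exact nowhere, which is the
hologram-like behaviour the article is pointing at.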
>
> I do see some intuitive motivation for applying the holographic
> (diffraction and reconstruction-through-interference) analogy to both LLMs
> (semantic holograms) and diffusion models (perceptual holograms)?
>
> I'm not very well versed in psychology but do find the whole article
> compelling (though not necessarily conclusive)... others here may have
> different parallax to offer?
>
> - Steve