[FRIAM] "how we turn thoughts into sentences"
Steve Smith
sasmyth at swcp.com
Thu Jul 17 13:48:44 EDT 2025
Clicking through the linked references: the OpenAI "research" reads
like a list of press releases promoting their agenda; Anthropic's list
reads much more like SciAm or a similarly pitched popular publication
actually trying to communicate the issues:
https://transformer-circuits.pub/2022/toy_model/index.html#motivation
My "dive" into this is still very shallow (maybe someone can
de-metaphorize this for me?), but it aligns well with my own
ad-hoc/naive growing apprehensions. There is a lot here (for me),
including what feels like a strong parallel (apt for
metaphorical/analogical domain transfer?) with genotype/phenotype
thinking.
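For anyone who wants the toy-model link de-metaphorized a little: the setup in that Anthropic page is, as I read it, "more sparse features than dimensions, squeezed through a linear bottleneck and read back out with a ReLU." Here is a minimal sketch of that forward pass only (my own illustrative code, not Anthropic's, with made-up sizes and no training loop):

```python
# Illustrative sketch of the "toy model of superposition" setup:
# n_features sparse features compressed into n_hidden < n_features
# dimensions, then reconstructed via a ReLU readout.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 5, 2            # more features than dimensions
W = rng.normal(size=(n_hidden, n_features))
b = np.zeros(n_features)

def reconstruct(x):
    """Compress x into the bottleneck, then try to recover it."""
    h = W @ x                          # bottleneck representation
    return np.maximum(0.0, W.T @ h + b)  # ReLU readout back in feature space

# A sparse input: only one feature active. Sparsity is what makes
# superposition viable in the paper's experiments.
x = np.zeros(n_features)
x[3] = 1.0
x_hat = reconstruct(x)

print(x_hat.shape)  # reconstruction lives back in 5-d feature space
```

The interesting behavior in the actual paper comes from training W under varying sparsity, which this sketch deliberately omits.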
On 7/17/2025 11:20 AM, Steve Smith wrote:
>
> https://scitechdaily.com/researchers-decode-how-we-turn-thoughts-into-sentences/
>
> I'm hoping/expecting some folks here are as fascinated with these
> things as I am? LLMs, interpretability, natural vs. are to me what
> weather/vortices/entropy-intuition is to Nick?
>
> As someone who spends way too much time composing sentences (in
> writing) through this impedance-mismatched interface (keyboard), I have
> a strong (if misleading, or at least idiosyncratic) apprehension of
> how I might form sentences from thoughts, and perhaps even
> forward/back propagate possible expressions and structures *all the
> way* to where I imagine my interlocutors (often all y'all here)
> reading and responding internally (mentally) and online. My
> engagement with the LLMs in "casual conversation" includes a great
> deal of this, albeit understanding that I'm talking to "a stochastic
> parrot" or more aptly perhaps "making faces into a funhouse mirror"
> (reminding me that I really want to compose a good-faith answer to
> glen's very sincere and I think pivotal questions about metaphor).
>
> I haven't parsed the linked article deeply yet and have not sought out
> the actual paper itself yet, but find the ideas presented very
> provocative or at least evocative? It triggers hopeful imaginings
> about connections with the cortical column work of Hawkins/Numenta as
> well as the never-ending topics of FriAM: "Effing the inEffable" and
> "Metaphors all the way Down?"
>
> I don't expect this line of research to *answer* those questions, but
> possibly shed some scattered light onto their periphery (oopsie, I
> waxed up another metaphor to shoot some curls)? For example, might
> the electrocorticography during ideation-to-speech transmogrification
> show us how strongly metaphorical constructions differ from more
> concise or formal analogical versions (if they are a spectrum) or how
> attempts to "eff the ineffable" might yield widely branching (bushy)
> explorations, ending in some kind of truncation by fatigue or
> (de)saturation?
>
> https://www.nature.com/articles/s44271-025-00270-1
>
> And are attempts at interpreting LLMs in some meaningful way collinear
> with this, or do they offer important parallax (to reference the
> "steam-engine/thermodynamics" duality)?
>
> And me, here, with obviously "way too much time" on my hands and a
> fascination with LLMs and an urgency to try to keep traction on the
> increasing slope of "the singularity" and a mild facility with visual
> analytics and *I* haven't even begun to keep up... This list
> (ironically) was formulated by GPT and I've not (and surely will not)
> do much double-checking beyond (hopefully) diving deep(er) into the
> work. I was mildly surprised there were no 2025 references... I'm
> guessing the blogs are running commentary including current work.
> I'll go click through as soon as I hit <send> here (imagine the
> next-token prediction I am doing as I decide to try to stop typing and
> hit <send>?)
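
De-metaphorizing my own quip above: "next-token prediction" at its most stripped-down is just picking a likely successor from counted co-occurrences. A toy bigram sketch (entirely hypothetical, with a made-up corpus, and nothing like what a real LLM does beyond the bare idea):

```python
# Toy bigram "next-token prediction": count which word follows which
# in a tiny corpus, then predict the most frequent successor.
from collections import Counter, defaultdict

corpus = "i will hit send now . i will stop typing now .".split()

# Tally successors for each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("now"))   # '.' -- "now" is always followed by "." here
```

A real model replaces the count table with a learned distribution over a huge vocabulary, but the "decide whether the next token is <send>" framing maps onto the same pick-a-successor step.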
>
> *“A Survey of Explainability and Interpretability in Large
> Language Models”* (ACM Computing Surveys, 2024)
> Comprehensive classification of methods, with comparisons between
> mechanistic and post‑hoc approaches.
> Preprint on arXiv: https://arxiv.org/abs/2310.01789
>
> *Anthropic’s Interpretability Research Pages* (2023–2024)
> https://www.anthropic.com/research
>
> *OpenAI’s Technical Blog: “Language Models and Interpretability”*
> (2023)
> Discussion of interpretability challenges, with examples from
> GPT‑4-level models:
> https://openai.com/research
>
> *NeurIPS 2023 Workshop on XAI for Large Models*
> Video talks & proceedings with up-to-date methods:
> https://nips.cc/virtual/2023/workshop/66533
>
>
>
> .- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. --- -. --. / ... --- -- . / .- .-. . / ..- ... . ..-. ..- .-..
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
> 1/2003 thru 6/2021 http://friam.383.s1.nabble.com/