[FRIAM] "how we turn thoughts into sentences"

Steve Smith sasmyth at swcp.com
Thu Jul 17 13:20:49 EDT 2025


    https://scitechdaily.com/researchers-decode-how-we-turn-thoughts-into-sentences/

I'm hoping/expecting some folks here are as fascinated with these things 
as I am?  LLMs, interpretability, and natural vs. artificial language are 
to me as weather/vortices/entropy-intuition is to Nick?

As someone who spends way too much time composing sentences (in writing) 
through this impedance-mismatched interface (keyboard), I have a strong 
(if misleading, or at least idiosyncratic) apprehension of how I might 
form sentences from thoughts, and perhaps even forward/back propagate 
possible expressions and structures *all the way* to where I imagine my 
interlocutors (often all y'all here) reading and responding internally 
(mentally) and online.  My engagement with the LLMs in "casual 
conversation" includes a great deal of this, albeit with the understanding that 
I'm talking to "a stochastic parrot" or more aptly perhaps "making faces 
into a funhouse mirror" (reminding me that I really want to compose a 
good-faith answer to glen's very sincere and I think pivotal questions 
about metaphor).

I haven't parsed the linked article deeply yet, and have not sought out 
the actual paper itself, but I find the ideas presented very provocative, 
or at least evocative?  They trigger hopeful imaginings about connections 
with the cortical column work of Hawkins/Numenta as well as the 
never-ending topics of FriAM: "Effing the inEffable" and "Metaphors all 
the way Down?"

I don't expect this line of research to *answer* those questions, but 
possibly to shed some scattered light onto their periphery (oopsie, I 
waxed up another metaphor to shoot some curls)?  For example, might the 
electrocorticography during ideation-to-speech transmogrification show 
us how strongly metaphorical constructions differ from more concise or 
formal analogical versions (if they are a spectrum) or how attempts to 
"eff the ineffable" might yield widely branching (bushy) explorations, 
ending in some kind of truncation by fatigue or (de)saturation?

    https://www.nature.com/articles/s44271-025-00270-1

And are attempts at interpreting LLMs collinear with this in some 
meaningful way, or do they offer an important parallax (to reference the 
"steam-engine/thermodynamics" duality)?

And me, here, with obviously "way too much time" on my hands, a 
fascination with LLMs, an urgency to try to keep traction on the 
increasing slope of "the singularity," and a mild facility with visual 
analytics, and *I* haven't even begun to keep up...  The list below 
(ironically) was formulated by GPT, and I've not done (and surely will 
not do) much double-checking beyond (hopefully) diving deep(er) into the 
work.  I was mildly surprised there were no 2025 references...  I'm 
guessing the blogs are running commentary that includes current work.  
I'll go click through as soon as I hit <send> here (imagine the 
next-token prediction I am doing as I decide to try to stop typing and 
hit <send>?  A toy sketch of that follows the list.)

    *“A Survey of Explainability and Interpretability in Large Language
    Models”* (ACM Computing Surveys, 2024)
    Comprehensive classification of methods, with comparisons between
    mechanistic and post‑hoc approaches.
    Preprint: arXiv:2310.01789, https://arxiv.org/abs/2310.01789

    *Anthropic’s Interpretability Research Pages* (2023–2024)
    https://www.anthropic.com/research

    *OpenAI’s Technical Blog: “Language Models and Interpretability”* (2023)
    Discussion of interpretability challenges, with examples from
    GPT‑4-level models:
    https://openai.com/research

    *NeurIPS 2023 Workshop on XAI for Large Models*
    Video talks & proceedings with up-to-date methods:
    https://nips.cc/virtual/2023/workshop/66533
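
(And since I joked about next-token prediction above: here's a toy, 
entirely made-up Python sketch of what greedy next-token prediction 
looks like mechanically.  The bigram table and tokens are invented for 
illustration and have nothing to do with the paper or the references 
above.)

    # Toy sketch: greedy "next-token prediction" over a hand-made bigram table.
    # All tokens and probabilities are invented, purely for illustration.
    bigram_probs = {
        "decide": {"to": 0.9, "that": 0.1},
        "to":     {"hit": 0.6, "stop": 0.4},
        "hit":    {"<send>": 0.7, "delete": 0.3},
    }

    def next_token(context):
        # Pick the most probable continuation of the last emitted token.
        options = bigram_probs.get(context[-1], {})
        return max(options, key=options.get) if options else "<send>"

    tokens = ["decide"]
    while tokens[-1] != "<send>":
        tokens.append(next_token(tokens))
    print(" ".join(tokens))   # -> decide to hit <send>

A real LLM runs the same loop, just with a transformer producing the 
conditional distribution over ~100k tokens from the whole preceding 
context instead of a three-entry bigram table.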
