<!DOCTYPE html>
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<blockquote>
<p><a
href="https://scitechdaily.com/researchers-decode-how-we-turn-thoughts-into-sentences/"
class="moz-txt-link-freetext">https://scitechdaily.com/researchers-decode-how-we-turn-thoughts-into-sentences/</a></p>
</blockquote>
<p>I'm hoping/expecting some folks here are as fascinated with these
things as I am? LLMs, interpretability, natural vs. are to me as
weather/vortices/entropy-intuition is to Nick?<br>
</p>
<p>As someone who spends way too much time composing sentences (in
writing) through this impedance-mismatched interface (keyboard), I
have a strong (if misleading, or at least idiosyncratic)
apprehension of how I might form sentences from thoughts, and
perhaps even forward/back propagate possible expressions and
structures *all the way* to where I imagine my interlocutors
(often all y'all here) reading and responding internally
(mentally) and online. My engagement with the LLMs in "casual
conversation" includes a great deal of this, albeit understanding
that I'm talking to "a stochastic parrot" or, more aptly perhaps,
"making faces into a funhouse mirror" (reminding me that I really
want to compose a good-faith answer to glen's very sincere and I
think pivotal questions about metaphor). <br>
</p>
<p>I haven't parsed the linked article deeply yet and have not
sought out the actual paper itself, but find the ideas
presented very provocative, or at least evocative? It triggers
hopeful imaginings about connections with the cortical column work
of Hawkins/Numenta as well as the never-ending topics of FriAM:
"Effing the inEffable" and "Metaphors all the way Down?" </p>
<p> I don't expect this line of research to *answer* those
questions, but possibly shed some scattered light onto their
periphery (oopsie, I waxed up another metaphor to shoot some
curls)? For example, might electrocorticography during
ideation-to-speech transmogrification show us how strongly
metaphorical constructions differ from more concise or formal
analogical versions (if they are a spectrum), or how attempts to
"eff the ineffable" might yield widely branching (bushy)
explorations, ending in some kind of truncation by fatigue or
(de)saturation?</p>
<blockquote>
<p><a href="https://www.nature.com/articles/s44271-025-00270-1"
class="moz-txt-link-freetext">https://www.nature.com/articles/s44271-025-00270-1</a></p>
</blockquote>
<p>And are attempts at interpreting LLMs in some meaningful way
collinear, or do they offer important parallax (to reference the
"steam-engine/thermodynamics" duality)?</p>
<p>And me, here, with obviously "way too much time" on my hands,
a fascination with LLMs, an urgency to try to keep traction on
the increasing slope of "the singularity," and a mild facility with
visual analytics, and *I* haven't even begun to keep up... This
list (ironically) was formulated by GPT, and I've not done (and
surely will not do) much double-checking beyond (hopefully) diving
deep(er) into the work. I was mildly surprised there were no
2025 references... I'm guessing the blogs are running commentary
including current work. I'll go click through as soon as I hit
&lt;send&gt; here (imagine the next-token prediction I am doing as
I decide to try to stop typing and hit &lt;send&gt;?)<br>
</p>
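<p>(And since I invoked it: a toy rendering of that next-token
prediction, using GPT-2 via Hugging Face transformers purely
because it's small and public; the prompt and model choice are my
assumptions, not anything from the linked work.)</p>
<pre>
# Toy illustration of next-token prediction (GPT-2 chosen only for size;
# any causal LM would do).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I decide to try to stop typing and hit"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode(int(idx))!r}: {p:.3f}")
</pre>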
<blockquote><strong>“A Survey of
Explainability and Interpretability in Large Language Models”</strong>
(ACM Computing Surveys, 2024)<br>
Comprehensive classification of methods, with comparisons between
mechanistic and post‑hoc approaches.<br>
<a href="https://arxiv.org/abs/2310.01789">Preprint
link on arXiv: [arXiv:2310.01789]</a></blockquote>
<blockquote><strong>Anthropic’s
Interpretability Research Pages</strong> (2023–2024)<br>
<a href="https://www.anthropic.com/research"
class="moz-txt-link-freetext">https://www.anthropic.com/research</a></blockquote>
<blockquote><strong>OpenAI’s
Technical Blog: “Language Models and Interpretability”</strong>
(2023)<br>
Discussion of interpretability challenges, with examples from
GPT‑4-level models:<br>
<a href="https://openai.com/research"
class="moz-txt-link-freetext">https://openai.com/research</a><br>
<br>
<strong>NeurIPS 2023 Workshop on
XAI for Large Models</strong><br>
Video talks &amp; proceedings with up-to-date methods:<br>
<a href="https://nips.cc/virtual/2023/workshop/66533"
class="moz-txt-link-freetext">https://nips.cc/virtual/2023/workshop/66533</a><br>
</blockquote>
</body>
</html>