<div dir="ltr">Thanks for passing this along, Steve. I wish, however, the authors of this short piece would have included a definition of, in their usage, "Large Language Models" and "Small Language Models." Perhaps I can find those in the larger paper.<div>Tom</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Oct 7, 2023 at 12:34 PM Steve Smith <<a href="mailto:sasmyth@swcp.com">sasmyth@swcp.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>This popular-press article came through my Google News feed
recently; I thought it might be useful to the
Journalists/English-Majors on the list for understanding how LLMs
work. When I read it in detail (forwarded from my TS
(TinyScreenPhone) to my LS (Large Screen Laptop)), I found it a bit
more detailed and technical than I'd expected, but nevertheless
rewarding, and possibly offering some traction to
Journalism/English majors as well as to those with a larger
investment in the CS/Math it implies.<br>
</p>
<blockquote>
<p><a href="https://www.anthropic.com/index/decomposing-language-models-into-understandable-components" target="_blank">Decomposing
Language Models into Understandable Components<br>
</a></p>
<blockquote>
<blockquote>
<blockquote><img src="https://efficient-manatee.transforms.svdcdn.com/production/images/Untitled-Artwork-11.png?w=2880&h=1620&auto=compress%2Cformat&fit=crop&dm=1696477668&s=d32264d5f5e32c79026b8e310e415c74" alt="" width="237" height="133"></blockquote>
</blockquote>
</blockquote>
</blockquote>
<p>and the (more) technical paper behind the article</p>
<blockquote>
<p><a href="https://transformer-circuits.pub/2023/monosemantic-features/index.html" target="_blank">https://transformer-circuits.pub/2023/monosemantic-features/index.html<br>
</a></p>
</blockquote>
Despite having sent a few dogs into vaguely similar scuffles in my
careen(r):<br>
<blockquote><a href="https://apps.dtic.mil/sti/tr/pdf/ADA588086.pdf" target="_blank">Faceted
Ontologies for Pre Incident Indicator Analysis </a><br>
<a href="https://www.ehu.eus/ccwintco/uploads/c/c6/HAIS2010_925.pdf" target="_blank">SpindleViz</a><br>
...<br>
</blockquote>
<p>... I admit to finding this both intriguing and well over my head
on casual inspection... the (metaphorical?) keywords that drew me
in most strongly included <i>Superposition</i> and <i>Thought
Vectors</i>, though they are (nod to Glen) probably riddled
(heaped, overflowing, bursting, bloated ... ) with excess
meaning.<br>
</p>
<p><a href="https://gabgoh.github.io/ThoughtVectors/" target="_blank">https://gabgoh.github.io/ThoughtVectors/</a></p>
<p>This leads me (surprise!) to an open-ended discursive series of
thoughts probably better left for a separate posting (perhaps
rendered in a semasiographic language like <a href="https://en.wikipedia.org/wiki/Heptapod_languages#Orthography" target="_blank">Heptapod
B</a>). <br>
</p>
<p>&lt;must... stop... now... &gt;</p>
<p>- Steve<br>
</p>
</div>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .<br>
FRIAM Applied Complexity Group listserv<br>
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom <a href="https://bit.ly/virtualfriam" rel="noreferrer" target="_blank">https://bit.ly/virtualfriam</a><br>
to (un)subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com" rel="noreferrer" target="_blank">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>
FRIAM-COMIC <a href="http://friam-comic.blogspot.com/" rel="noreferrer" target="_blank">http://friam-comic.blogspot.com/</a><br>
archives: 5/2017 thru present <a href="https://redfish.com/pipermail/friam_redfish.com/" rel="noreferrer" target="_blank">https://redfish.com/pipermail/friam_redfish.com/</a><br>
1/2003 thru 6/2021 <a href="http://friam.383.s1.nabble.com/" rel="noreferrer" target="_blank">http://friam.383.s1.nabble.com/</a><br>
</blockquote></div>