<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>I tripped over (in my Gnewsfeed) <a moz-do-not-send="true"
href="https://www.marktechpost.com/2023/04/06/8-potentially-surprising-things-to-know-about-large-language-models-llms/">an
article that seemed to speak more clearly</a> to some of my
maunderings:</p>
<blockquote>
<h1 class="entry-title"><a moz-do-not-send="true"
href="https://www.marktechpost.com/2023/04/06/8-potentially-surprising-things-to-know-about-large-language-models-llms/">8
Potentially Surprising Things To Know About Large Language Models
(LLMs)</a></h1>
</blockquote>
<p>And the paper it summarizes (with a similar title, more detail,
and references):</p>
<blockquote>
<p><a moz-do-not-send="true"
href="https://arxiv.org/pdf/2304.00612.pdf">8 Things to Know About
Large Language Models - Samuel R. Bowman</a></p>
</blockquote>
<p>And in particular this point made:</p>
<blockquote>
<ol start="3">
<li><strong>LLMs frequently acquire and employ external-world
representations.</strong></li>
</ol>
<p>More and more evidence suggests that LLMs build internal
representations of the world, allowing them to reason at an
abstract level insensitive to the specific language form of the
text. The evidence for this phenomenon is strongest in the largest
and most recent models, so it should be anticipated that it will
grow more robust as systems are scaled up further. Nevertheless,
current LLMs still need to do this more consistently and
effectively.</p>
<p>The following findings, based on a wide variety of experimental
techniques and theoretical models, support this assertion.</p>
<ul>
<li>The internal color representations of models are highly
consistent with empirical findings on how humans perceive
color.</li>
<li>Models can infer the author’s knowledge and beliefs and use
them to predict how the document will continue.</li>
<li>Given stories, models update their internal representations of
the features and locations of the objects the stories
describe.</li>
<li>Models can sometimes explain how to depict unusual objects on
paper.</li>
<li>Models pass many commonsense-reasoning tests, even ones like
the Winograd Schema Challenge that are designed to contain no
textual hints to the answer.</li>
</ul>
<p>These findings counter the conventional wisdom that LLMs are
merely statistical next-word predictors that cannot generalize
their learning or reasoning beyond text.</p>
</blockquote>
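<p>The Winograd Schema Challenge mentioned in that list is built from
minimal sentence pairs that differ by a single word, which flips the
referent of an ambiguous pronoun, so surface statistics alone give no
hint to the answer. A small Python sketch of one classic schema (the
example sentence is Levesque's well-known one, not taken from the
article):</p>

```python
# Classic Winograd schema: swapping one verb flips the referent of
# "they", so the answer cannot be read off the surrounding words.
schema = {
    "template": ("The city councilmen refused the demonstrators a "
                 "permit because they {verb} violence."),
    "instances": [
        {"verb": "feared", "answer": "the city councilmen"},
        {"verb": "advocated", "answer": "the demonstrators"},
    ],
}

def render(instance):
    """Fill the shared template for one instance of the schema."""
    return schema["template"].format(verb=instance["verb"])

for inst in schema["instances"]:
    print(render(inst), "->", inst["answer"])
```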
<div class="moz-cite-prefix">On 4/6/23 8:27 AM, Steve Smith wrote:<br>
</div>
<blockquote type="cite"
cite="mid:e851b7ee-f179-440a-e3ba-ef5b15c0eb53@swcp.com">I have
been reading Jeff Hawkins' _A Thousand Brains_, which is roughly
*his* take on AI from the perspective of the neuroscience *he* has
been doing for a few decades, including building models of the
neocortex.
<br>
<br>
What struck me strongly was how much *I* expect anything I'd want
to call artificial *consciousness* to engage in "co-munnication"
in the strongest sense. Glen regularly admonishes us that
"communication" may be an illusion and something we don't actually
*do* or maybe more to the point "it doesn't mean what we think
it means"?
<br>
<br>
So for all the parlor tricks I've enjoyed playing with chatGPT and
DALL-E and maybe even more spectacularly the myriad examples
*others* have teased out of those systems, I am always looking for
what sort of "internal state" these systems are exposing to me in
their "utterances". And by extension, I am looking to see if it
is in any way apprehending *me* through my questions and prompts.
<br>
<br>
Dialog with chatGPT feels pretty familiar to me, as if I'm
conversing with an unusually polite and cooperative polymath. It
is freeing to feel I can ask "it" any question which I can
formulate and can expect back a pretty *straight* answer if not
always one I was hoping for. "It" seems pretty insightful and
usually picks up on the nuances of my questions. As often as
not, I need to follow up with refined questions which channel the
answers away from the "mundane or obvious" but when I do, it
rarely misses a trick or is evasive or harps on something from
its own (apparent) agenda. It only does that when I ask it
questions about its own nature, formulation, or domain, and then it
just seems blunted as if it has a lawyer or politician
intercepting some of those questions and answering them for it.
<br>
<br>
I have learned to "frame" my questions by first asking it to defer
its response until I've given it some ... "framing" for the
actual question. Otherwise I go through the other series of
steps where I have to re-ask the same question with more and more
context or ask a very long and convoluted question. At first it
was a pleasure to be able to unlimber my
convoluted-question-generator and have it (not mis-)understand me
and even not seem to "miss a trick". As I learned to generate
several framing statements before asking my question, I have found
that I *can* give it too many constraints (apparently) such that
it respects some/most of my framing but then avoids or ignores
other parts. At that point I have to ask follow-up, elaborating,
contextualizing questions.
<br>
<br>
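<p>That framing workflow maps naturally onto the message lists that
chat-style APIs consume: each framing statement becomes its own
message ahead of the actual question. A minimal Python sketch (the
helper name and the example framing strings are illustrative, not an
actual transcript):</p>

```python
def build_framed_prompt(framing_statements, question):
    """Assemble a chat-style message list: an explicit request to
    defer the answer, then each framing statement as its own
    message, then the actual question last."""
    messages = [{"role": "user",
                 "content": "Please defer your response until I have "
                            "finished giving you some framing."}]
    for statement in framing_statements:
        messages.append({"role": "user", "content": statement})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_framed_prompt(
    ["Assume I already know the standard textbook account.",
     "I am interested in dissenting or minority views."],
    "What are the main critiques of the neocortical column model?")
```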
I do not yet feel like I am actually seeing into chatGPT's soul or
in any way being seen by it. That will be for a future
generation I suspect. Otherwise it is one hella "research
assistant" and "spitball partner" on most any topic I've
considered that isn't too contemporary (training set ended 2021?).
<br>
<br>
- Steve
<br>
<br>
On 4/4/23 5:54 PM, Prof David West wrote:
<br>
<blockquote type="cite">Based on the flood of stories about
ChatAI, it appears:
<br>
- they can 'do' math and 'reason' scientifically
<br>
- they can generate essays, term papers, etc.
<br>
- they can engage in convincing dialog/conversations
<br>
- as "therapists"
<br>
- as "girlfriends" (I haven't seen any stories about women
falling in love with their AI)
<br>
- as kinksters
<br>
- they can write code
<br>
<br>
The writing code ability immediately made me wonder if, given a
database of music instead of text, they could write music?
<br>
<br>
The dialog /conversation ability makes me wonder about more
real-time collaborative interaction, improv acting / comedy? Or,
pair programming? The real-time aspect is critical to my
question, as I believe there is something qualitatively
different between two people doing improv or pair programming
than simply engaging in dialog. I think I could make a much
stronger argument in the case of improv music, especially jazz,
but AIs aren't doing that yet.
<br>
<br>
davew
<br>
<br>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -..
.
<br>
FRIAM Applied Complexity Group listserv
<br>
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
<a class="moz-txt-link-freetext" href="https://bit.ly/virtualfriam">https://bit.ly/virtualfriam</a>
<br>
to (un)subscribe
<a class="moz-txt-link-freetext" href="http://redfish.com/mailman/listinfo/friam_redfish.com">http://redfish.com/mailman/listinfo/friam_redfish.com</a>
<br>
FRIAM-COMIC <a class="moz-txt-link-freetext" href="http://friam-comic.blogspot.com/">http://friam-comic.blogspot.com/</a>
<br>
archives: 5/2017 thru present
<a class="moz-txt-link-freetext" href="https://redfish.com/pipermail/friam_redfish.com/">https://redfish.com/pipermail/friam_redfish.com/</a>
<br>
1/2003 thru 6/2021 <a class="moz-txt-link-freetext" href="http://friam.383.s1.nabble.com/">http://friam.383.s1.nabble.com/</a>
<br>
</blockquote>
</blockquote>
</body>
</html>