<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>DaveW -</p>
<p>I really don't know much, if anything, about these modern
AIs, beyond what pops up on the myriad popular science/tech feeds
that are part of *my* training set/source. I studied some AI in
the 70s/80s, then "Learning Classifier Systems" and (other)
Machine Learning techniques in the late 90s, and then worked with
folks who did Neural Nets during the early 00s, including trying
to help them find patterns *in* the NN structures to correlate
with the function of their NNs and training sets, etc.</p>
<p>The one thing I would say about what I hear you saying here is
that I don't think these modern learning models, by definition,
have either syntax *or* semantics built into them... they are
what I colloquially (because I'm sure there is a very precise term
of art by the same name) think of or call "model-less" models.
At most, I think the only models of language explicit in them
might be the alphabet and conventions about white-space and
perhaps punctuation? And very likely they span *many* languages,
not just English or maybe even "Indo-European". <br>
</p>
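<p>To make that a bit more concrete: as I understand it, the modern
tokenizers start from exactly that kind of minimal scaffolding
(characters and word boundaries) and induce everything else
statistically, by repeatedly merging the most frequent adjacent pair
of symbols (byte-pair encoding). A toy sketch, with a made-up corpus
and merge count:</p>
<pre>
from collections import Counter

def learn_merges(corpus, num_merges=5):
    """Greedily merge the most frequent adjacent symbol pair (BPE-style)."""
    # Start from bare characters, with '_' marking ends of words.
    words = Counter(tuple(w) + ('_',) for w in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in words.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        a, b = max(pairs, key=pairs.get)
        merges.append((a, b))
        new_words = Counter()
        for word, freq in words.items():
            out, i = [], 0
            while i &lt; len(word):
                if word[i:i + 2] == (a, b):
                    out.append(a + b)   # fuse the pair into one symbol
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_words[tuple(out)] += freq
        words = new_words
    return merges

print(learn_merges("the cat sat on the mat the cat sat"))
</pre>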
<p>I wonder what others know about these things, or whether there
are known good references. <br>
</p>
<p>Perhaps we should just feed these maunderings into ChatGPT and it
will sort us out forthwith?!</p>
<p>- SteveS<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 2/7/23 2:57 PM, Prof David West
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:7965dd10-b2a5-44da-9cb2-2b1c274c0c05@app.fastmail.com">
<div style="font-family:Arial;">I am curious, but not enough to do
some hard research to confirm or deny, but ...<br>
</div>
<div style="font-family:Arial;"><br>
</div>
<div style="font-family:Arial;">Surface appearances suggest, to
me, that the large language model AIs seem to focus on syntax
and statistical word usage derived from those large datasets.<br>
</div>
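<div style="font-family:Arial;">One way to see what "statistical
word usage" means in practice: a next-word table built from nothing
but co-occurrence counts already yields plausible-looking
continuations, with no semantics anywhere in sight. A toy sketch
(the corpus is invented):<br>
</div>
<pre>
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which -- pure statistics, no meaning.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# The "prediction" for what follows "the" is just the counts:
print(following["the"].most_common())
</pre>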
<div style="font-family:Arial;"><br>
</div>
<div style="font-family:Arial;">I do not see any evidence in same
of semantics (probably because I am but a "bear of little
brain.")<br>
</div>
<div style="font-family:Arial;"><br>
</div>
<div style="font-family:Arial;">In contrast, the Cyc project
(Douglas Lenat, 1984 - and still out there as an expensive AI)
was all about semantics. The last time I was, briefly, at MCC,
they were just switching from teaching Cyc how to read
newspapers and engage in meaningful conversation about the news
of the day, to teaching it how to read the National Enquirer,
etc. and differentiate between syntactically and literally
'true' news and the false semantics behind same.<br>
</div>
<div style="font-family:Arial;"><br>
</div>
<div style="font-family:Arial;">davew<br>
</div>
<div style="font-family:Arial;"><br>
</div>
<div style="font-family:Arial;"><br>
</div>
<div>On Tue, Feb 7, 2023, at 11:35 AM, Jochen Fromm wrote:<br>
</div>
<blockquote type="cite" id="qt" style="">
<div dir="auto">I was just wondering if our prefrontal cortex
areas in the brain contain a large language model too - but
each of them trained on slightly different datasets. Similar
enough to understand each other, but different enough so that
everyone has a unique experience and point of view o_O<br>
</div>
<div dir="auto"><br>
</div>
<div dir="auto">-J.<br>
</div>
<div dir="auto"><br>
</div>
<div><br>
</div>
<div dir="auto" style="font-size:100%;color:rgb(0, 0, 0);"
align="left">
<div>-------- Original message --------<br>
</div>
<div>From: Marcus Daniels <a class="moz-txt-link-rfc2396E" href="mailto:marcus@snoutfarm.com"><marcus@snoutfarm.com></a><br>
</div>
<div>Date: 2/6/23 9:39 PM (GMT+01:00)<br>
</div>
<div>To: The Friday Morning Applied Complexity Coffee Group
<a class="moz-txt-link-rfc2396E" href="mailto:friam@redfish.com"><friam@redfish.com></a><br>
</div>
<div>Subject: Re: [FRIAM] Datasets as Experience<br>
</div>
<div><br>
</div>
</div>
<div class="qt-WordSection1">
<p class="qt-MsoNormal">It depends if it is given boundaries
between the datasets. Is it learning one distribution or
two?<br>
</p>
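<p class="qt-MsoNormal">A toy version of the question, with two
invented "datasets" drawn from different Gaussians: pooled with no
boundary marker, a single-distribution learner sees one blended
population; with the boundary, two narrower conditionals. A sketch
(all numbers made up):<br>
</p>
<pre>
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(-2.0, 1.0, 1000)   # hypothetical dataset one
b = rng.normal(+2.0, 1.0, 1000)   # hypothetical dataset two

pooled = np.concatenate([a, b])
# No boundary: one blended distribution, variance inflated
# by the separation between the two sources.
print(pooled.mean(), pooled.std())            # roughly 0.0, 2.2
# With boundaries: two narrower conditional distributions.
print(a.mean(), a.std(), b.mean(), b.std())   # roughly -2, 1 and +2, 1
</pre>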
<p class="qt-MsoNormal"> <br>
</p>
<div>
<div style="border:none;border-top:1pt solid rgb(225,225,225);padding:3pt 0in 0in 0in;">
<div><b>From:</b> Friam <a class="moz-txt-link-rfc2396E" href="mailto:friam-bounces@redfish.com"><friam-bounces@redfish.com></a>
<b>On Behalf Of </b>Jochen Fromm<br>
</div>
<div> <b>Sent:</b> Sunday, February 5, 2023 4:38 AM<br>
</div>
<div> <b>To:</b> The Friday Morning Applied Complexity
Coffee Group <a class="moz-txt-link-rfc2396E" href="mailto:friam@redfish.com"><friam@redfish.com></a><br>
</div>
<div> <b>Subject:</b> [FRIAM] Datasets as Experience<br>
</div>
</div>
</div>
<p class="qt-MsoNormal"> <br>
</p>
<div>
<p class="qt-MsoNormal">Would a CV of a large language model
contain all the datasets it has seen? As adaptive agents
of our selfish genes we are all trained on slightly
different datasets. A Spanish speaker is a person trained
on a Spanish dataset. An Italian speaker is a trained on
an Italian dataset, etc. Speakers of different languages
are trained on different datasets, therefore the same
sentence is easy for a native speaker but impossible to
understand for those who do not know the language. <br>
</p>
</div>
<div>
<p class="qt-MsoNormal"> <br>
</p>
</div>
<div>
<p class="qt-MsoNormal">Do all large language models need to
be trained on the same datasets? Or could many large
language models be combined to a society of mind as Marvin
Minsky describes it in his book "The society of mind"? Now
that they are able to understand language it seems to be
possible that one large language model replies to the
questions from another. And we would even be able to
understand the conversations.<br>
</p>
</div>
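<div>
<p class="qt-MsoNormal">A sketch of such a conversation loop,
where generate() is a hypothetical stand-in for any model call
(no particular API or library is assumed):<br>
</p>
</div>
<pre>
def generate(model_name, prompt):
    # Placeholder: in practice this would call a local or hosted LLM.
    return "[" + model_name + " replies to: " + prompt + "]"

def converse(model_a, model_b, opener, turns=4):
    """Let two models answer each other, Minsky-agent style."""
    message = opener
    for i in range(turns):
        speaker = model_a if i % 2 == 0 else model_b
        message = generate(speaker, message)
        print(speaker + ": " + message)

converse("model-A", "model-B", "What is a dataset, to you?")
</pre>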
<div>
<p class="qt-MsoNormal"> <br>
</p>
</div>
<div>
<p class="qt-MsoNormal">-J.<br>
</p>
</div>
<div>
<p class="qt-MsoNormal"> <br>
</p>
</div>
</div>
</blockquote>
<div style="font-family:Arial;"><br>
</div>
<br>
<fieldset class="moz-mime-attachment-header"></fieldset>
<pre class="moz-quote-pre" wrap="">-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom <a class="moz-txt-link-freetext" href="https://bit.ly/virtualfriam">https://bit.ly/virtualfriam</a>
to (un)subscribe <a class="moz-txt-link-freetext" href="http://redfish.com/mailman/listinfo/friam_redfish.com">http://redfish.com/mailman/listinfo/friam_redfish.com</a>
FRIAM-COMIC <a class="moz-txt-link-freetext" href="http://friam-comic.blogspot.com/">http://friam-comic.blogspot.com/</a>
archives: 5/2017 thru present <a class="moz-txt-link-freetext" href="https://redfish.com/pipermail/friam_redfish.com/">https://redfish.com/pipermail/friam_redfish.com/</a>
1/2003 thru 6/2021 <a class="moz-txt-link-freetext" href="http://friam.383.s1.nabble.com/">http://friam.383.s1.nabble.com/</a>
</pre>
</blockquote>
</body>
</html>