[FRIAM] ChatGPT is not very smart...

Santafe desmith at santafe.edu
Sun Feb 12 05:35:35 EST 2023


> On Feb 10, 2023, at 2:31 PM, Marcus Daniels <marcus at snoutfarm.com> wrote:
> 
> I don't think it is necessarily the case a very large neural net would need to backtrack.   It could devote more and more resources to different hypotheses so long as it was not energy constrained.

In some sense, I understand that this must be right, because your language is more faithful to how the algorithm works.

I have another colleague who has commented that one needs to think differently about large neural-net learners because in very high dimensions, there are not the same kinds of snags as those encountered in lower dimensions.  There is always “a way around”, and therefore gradient descent works better than one has come to expect.
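
That intuition can be made concrete with a toy calculation of my own (not my colleague's argument, and assuming a random-matrix caricature of the curvature): at a generic critical point in high dimensions, it is overwhelmingly likely that some direction still points downhill, so the point is a saddle rather than a trap.

    # Toy illustration only: sample random symmetric "Hessians" and count how
    # often every eigenvalue is positive, i.e. how often a critical point would
    # be a genuine local minimum with no downhill direction left to escape by.
    import numpy as np

    rng = np.random.default_rng(0)

    def fraction_of_true_minima(dim, trials=2000):
        hits = 0
        for _ in range(trials):
            a = rng.standard_normal((dim, dim))
            hessian = (a + a.T) / 2               # random symmetric matrix
            if np.all(np.linalg.eigvalsh(hessian) > 0):
                hits += 1
        return hits / trials

    for dim in (1, 2, 4, 8):
        print(dim, fraction_of_true_minima(dim))
    # the fraction collapses as dimension grows: almost every critical point
    # is a saddle, so gradient descent usually does have "a way around"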

Yet in the output there still seems to be a tree of some kind; maybe it arises from things that some design logic leaves inflexible (?):

Time runs forward, for the machine as for the rest of us.

Having said something, the chatbot can’t un-say it.  

So then there is a design decision: Do we take what we have said as a constraint on what we can say next?  I can well imagine that there is strong reinforcement for some version of that, because it is inherent in fluidity, and even in the notion of a “topic” in the pragmatics of a string of sentences.  If topicalization somehow means strict retention (doubling down), then our speech lives on a tree, and there seem to be certain ways of putting “more resources on different hypotheses” that become closed to us.  Glen’s characterization as mansplaining encapsulates this route nicely in one word.

Or does the chatbot have a way to actually say “Wow, shit.  I was confused; start over”?  Then more of the dimensionality would be available.
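
To make that design decision concrete, here is a minimal toy sketch (Python, with a made-up scoring function standing in for a model's next-word preferences; nothing below is how any real chatbot decodes):

    import random

    VOCAB = ["iowa", "ohio", "utah", "however", "apologize", "."]

    def score(prefix, token):
        # hypothetical scorer: strongly rewards continuity with what has
        # already been said (the "doubling down" pressure)
        return random.random() + (2.0 if token in prefix else 0.0)

    def committed_decoder(steps=8):
        # having said something, it cannot un-say it: each word is chosen
        # given the growing prefix, the prefix is never revised, and the
        # continuity bonus locks it onto whatever it said first
        prefix = []
        for _ in range(steps):
            prefix.append(max(VOCAB, key=lambda t: score(prefix, t)))
        return prefix

    def restartable_decoder(steps=8, max_repeats=3):
        # a variant allowed to say "wow, I was confused; start over": when it
        # catches itself doubling down too long, it wipes the prefix, and the
        # rest of the vocabulary becomes reachable again
        prefix = []
        for _ in range(steps):
            best = max(VOCAB, key=lambda t: score(prefix, t))
            if prefix.count(best) >= max_repeats:
                prefix = []          # un-say everything
                best = max(VOCAB, key=lambda t: score(prefix, t))
            prefix.append(best)
        return prefix

    print(committed_decoder())    # keeps repeating its first word
    print(restartable_decoder())  # periodically abandons the commitment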

Are there any chatbots that operate in this general space of what people would call “admitting one’s mistakes”?  If not, why not?  Is it a design consideration in what to reward?  I can’t imagine it’s a property of the training set in the abstract, as there is all sorts of literature on admitting mistakes (the wedding guest in the Ancient Mariner, “a sadder and a wiser man”).  Are those patterns somehow “harder” to learn, even though they are there?  What would make them harder, as a category of composition patterns?

A “third way”, I guess, is what one could call the “dumb pathological liar” (a sort of trump-like character), who simply rambles along in mutually contradictory utterances and doesn’t “structurally” acknowledge the contradictions.  Yet at some level of speech that isn’t happening, because there is local fluidity, there are topics, and so forth.  At that lower level, the preference given to continuity seems very strong.


A fun aside about topicalization.  Our friend, the historical linguist Sergei Starostin, whose English was to my ear essentially blank and perfect, was once telling me that for a Russian speaker, the last latent point of fear in speaking English was the use of the definite article.  He said that no matter how many years on, he still had this nagging stress about whether he was using it in the right places.  I don’t remember whether it was in that conversation — I think it was, and that the conversation was also about topicalization in Japanese with ga and wa, and in Chinese with topic-comment phrase structure (which French also uses in many constructions, though less structured around a “pivot”, as it would be called for Chinese) — but either then or subsequently I came to appreciate what a problem topicalization is.

I would say it lives in speech at the level of pragmatics, in that one can almost see the “attention” as a kind of searchlight that is moving around, and that pragmatics is supposed to respect in the unfolding of a conversation for the conversation to be coherent.  The challenge of marking topic — one form of “definiteness”, of the definite article versus an indefinite one — is that it involves the ever-negotiated problem of how much, either from the discourse or from presumed shared knowledge, the listener has in primed awareness at any given moment.  “The” drifts back and forth between implicit definiteness (I can just say “the moon”, without a further specifying clause, presuming that we both know there is only one) and definiteness that demands a specifier (the senator from Wisconsin, when first introduced in the discourse).  I guess “the” in English is unusually fraught, in that its insertion or omission also modulates category terms versus literal instances (the AI chatbots say silly things, versus AI chatbots say silly things), and all these functional roles are in tension with each other at the same time.

So it’s all very attention-scope semantic.  Yet it can fail to be semantic at other levels.  What it is about the encoding of speech that makes those levels so different is still hard for me to see.

Eric




> -----Original Message-----
> From: Friam <friam-bounces at redfish.com> On Behalf Of Santafe
> Sent: Friday, February 10, 2023 3:11 AM
> To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
> Subject: Re: [FRIAM] ChatGPT is not very smart...
> 
> So probably this has all been analyzed to death already, by people who, unlike me, know what they are talking about.
> 
> But in re-reading it, I feel like the structure of the problem is characterizable.  
> 
> It is as if “facts” that are constraints on the sentences to be composed are ordered in a way that is somewhat like lexicographic ordering.  
> 
> Lexicographically, azzzzzzz is before zaaaaaaa; no subsequent letters even get consideration (much less the set of them all in aggregate), until the first letter has dictated as much of the sorting as its values can index.
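> 
> Spelled out as a toy comparison, just to make the “no subsequent letters get consideration” point explicit:
> 
>     def lex_less(a, b):
>         # compare character by character; the first difference decides the
>         # order, and no later letter is ever consulted
>         for x, y in zip(a, b):
>             if x != y:
>                 return x < y
>         return len(a) < len(b)
> 
>     print(lex_less("azzzzzzz", "zaaaaaaa"))   # True: 'a' < 'z' settles it at position 0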
> 
> Likewise, somewhere chatGPT was stuck in a local valley, where there was a “fact” that a particular list of two states is “the core of the answer” to this set of questions, and membership in the list is the first condition in a selection-tree for replies.  However tortured, the following sentences have to operate within a reality that has already been truncated by the “first fact”.
> 
> (Again, since this is obvious, my even belaboring it is kind of dull once I see it on the page.)
> 
> Come to think of it, it reminds me too of Vygotsky’s claim that children first assign category terms by what he called “family resemblances”, and only later, at the age where some developmental threshold is crossed, do they form “predicates” in the sense we would use the term in language semantics.  “Family resemblances” can be, effectively, anything; they are very idiosyncratic to the experiential history of any particular child-in-context, and in that sense are fragile categories.  The distinction in predicates, once they emerge, is that they seem to be much less fragile, which amounts to being exchangeable across thinkers and across situations for a single thinker.
> 
> As predicates, membership-in-a-list, letter-count, and being-a-state-name are of quite unlike types.  Since there is a kind of Venn-diagram intersection logic that can be applied to letter-count and being-a-state-name, but which does not exist in general form for membership in arbitrary lists, we learn that the predicates admitting such logic should get priority in navigating streams of thoughts and of sentences.  But if, to chatGPT, those are not yet predicates, but rather various modes of family resemblance, all comparably eligible, one can see how a dialog of the form below would be a derivable outcome.
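> 
> As a concrete sketch of that intersection logic (using a small, hypothetical subset of state names for brevity):
> 
>     # once "is a state name" and "has four letters" are treated as composable
>     # predicates, the answer falls out of a set intersection
>     states = {"Iowa", "Ohio", "Utah", "Idaho", "Texas", "Maine", "Oregon"}  # subset for brevity
> 
>     four_letter_states = {s for s in states if len(s) == 4}
>     print(sorted(four_letter_states))   # ['Iowa', 'Ohio', 'Utah'], not the two-state list below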
> 
> If that is the right analysis, the step of getting stuck in a pre-emptive membership-in-a-list commitment is not the revealing, or by that criterion the interesting, part of the dialog below.  The interesting part becomes the suddenly much-higher need for “cleverness”, to find flowing sentences that can be forced to the far side of an unresolvable contradiction.  It’s like my laptop when some internet check-in step becomes unsatisfiable: it dumps more and more CPU into trying to do something that got misconfigured, the fan ramps up, and this spirals indefinitely until I stop what I am doing and kill the offending process.  One can imagine that “learning” what predicates “are”, that they “have” logics, and what those logics “are”, is a feedback loop of trying to avoid the mental costs of getting into blind alleys.  The deeper the backtrack to find the unbinding choice, the harder the learning program is, until we get up to scientific categories, where it becomes quite difficult to recognize that we had invalid meanings for “time”, “space”, and “state”.  But it is a kind of continuous process of rewarding laziness across all those levels of organization of thought.
> 
> I also, of course, cannot keep out of mind the George Santos characters or similar pathological liars, who live full-time in this navigating-snags mode.  Police interrogators think they can wear even pathological liars down — catch them out — because the effort level is high enough that eventually nobody can sustain it.  But at the same time, humans have a wide talent range for different tasks, and I could believe that for the pathological liar, this snag-navigating can be done much more fluently and with lower effort, than the same degree of performance would demand of a “normie”.
> 
> All the above feels about as satisfying as listening to somebody explain a joke.  Oh well.
> 
> Eric
> 
> 
>> On Feb 2, 2023, at 1:49 PM, Barry MacKichan <barry.mackichan at mackichan.com> wrote:
>> 
>> or is obsessed with four-letter words. From an email from another group. Paragraphs with an initial capital are ChatGPT’s responses. The others are input:
>> 
>> Chat GPT dialog, courtesy of Paul Ginsparg:
>> 
>> 
>> “how many states in the united states have four letter names?
>> 
>> “There are actually two states in the United States that have four-letter names: Iowa and Ohio. Both states were named after Native American words, with Iowa derived from the word "ayuxba," which means "this is the place," and Ohio coming from the Iroquois word "ohi-yo," meaning "great river.
>> 
>> “what about utah?
>> 
>> “Utah is not a four-letter word. It is a state in the western United States, named after the Ute tribe of Native Americans. The word "Utah" is derived from the Ute word "yuttahih," which means "people of the mountains."
>> 
>> “how many letters does utah have?
>> 
>> “The word "Utah" has four letters. It is the name of a state in the western United States, known for its natural beauty, including parks like Arches National Park and Bryce Canyon National Park. The state was named after the Ute tribe of Native Americans and has a rich history and culture.
>> 
>> “then why did you say utah is not a four-letter word?
>> 
>> “I apologize if my previous response was unclear. I did not mean to imply that the word "Utah" is not a valid word or that it does not refer to the state in the United States. My previous response was intended to address the specific question of whether Utah was one of the states in the United States with a four-letter name. As I mentioned, there are only two states in the United States with four-letter names: Iowa and Ohio. Utah, while it is a four-letter word, is not one of those states. I apologize if my previous response was confusing. Please let me know if you have any other questions.
>> 
>> “but didn't you just say that the word utah has four letters and is a state in the united states?
>> 
>> “Yes, that is correct. The word "Utah" has four letters. As I mentioned in my previous response, it is the name of a state in the western United States. The state was named after the Ute tribe of Native Americans and is known for its beautiful landscapes and outdoor recreational opportunities. I apologize if my previous response was unclear. My previous response was intended to address the specific question of whether Utah was one of the two states in the United States with four-letter names. As I mentioned, there are only two states with four-letter names: Iowa and Ohio. While "Utah" is a four-letter word, it is not one of those states. I apologize for any confusion. Please let me know if you have any other questions
>> 
>> —Barry
>> 
> 
> 



