[FRIAM] Bard and Don Quixote

glen gepropella at gmail.com
Wed May 17 14:03:05 EDT 2023


Yes, I tried to admit up front that "accurate", "exact", "indistinguishable", etc. are fraught. But the more interesting question is about subjectivity, pain, consciousness, reduction, training time, etc. One thing we often forget is the relationship between sequential and parallel processes. The idea that a person/organism is trained "for 75 years" relies on something sequential ... IDK what, but something. I can't help but go back to Against Narrativity. I feel very episodic. And my identity isn't tied up in things like birthdays or remembering exactly when, say, I first drove a car or went on my first date or whatever. I don't remember lots of these things ... even skills I had once (nearly) mastered, like writing C code without looking at a manual or using an IDE, are almost completely gone.

So, in that context, it seems perfectly reasonable that a bot, which relies on parallelism a LOT, could be trained up to act like me, now (i.e. within a band of ± 5 years). Maybe it takes 5 years to do it if the training is more sequential ... maybe some of that sequentiality can be parallelized so that it happens faster? IDK. And your mileage may vary. Many of you are narrative people, who "identify" as this or that thing and have identified that way for decades. (Of course, I doubt you actually *are* narrative... you just consistently trick yourself into thinking you are ... and society reinforces that narrative. But that's a thoroughly unjustified conjecture on my part. I'm sure there's plenty of variation.)
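
To make that sequential-vs-parallel point concrete, here's a toy sketch
(Python/NumPy; everything in it is made up for illustration and has
nothing to do with how any actual bot is trained). The same "lifetime"
of data gets consumed either one experience at a time, in order, or in
wide parallel gulps, and both land in roughly the same place:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 10))   # a "lifetime" of experiences
y = X @ rng.normal(size=10)          # what those experiences teach

# strictly sequential replay: 100,000 tiny steps, one after another
w_seq = np.zeros(10)
for xi, yi in zip(X, y):
    w_seq += 0.01 * (yi - xi @ w_seq) * xi

# the same data flattened into 100 wide steps of 1,000 examples each
w_par = np.zeros(10)
for idx in np.array_split(np.arange(len(X)), 100):
    Xb, yb = X[idx], y[idx]
    w_par += 0.5 * Xb.T @ (yb - Xb @ w_par) / len(idx)

print(np.allclose(w_seq, w_par, atol=0.1))   # True: same place, fewer steps

Same information either way; what changes is the number of steps that
have to happen one-after-another. How long the "75 years" takes depends
on how much of the sequence you can flatten into each step.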

For subjectivity, all that's required, I think, is self-attention, which these bots have, if only in a primitive form. So my answer to you is "yes": a bot that mimics me could only do it well if it experienced chronic spine pain, sporadic headaches, etc. Of course, whether the quality of the bot's self-attention is similar to the quality of my self-attention is an unanswerable, perhaps even nonsense, question. But it would have to have self-attention.
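
By "self-attention" I mean the mechanism at the core of the
transformers these bots are built on. Here's a minimal single-head
sketch (Python/NumPy; the names, shapes, and random weights are purely
illustrative, not any particular bot's internals):

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X holds one vector per token: shape (n_tokens, d_model)
    Q = X @ Wq                    # queries: what each token looks for
    K = X @ Wk                    # keys: what each token offers
    V = X @ Wv                    # values: the content that gets mixed
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise relevance
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # softmax over each row
    return w @ V   # each token re-expressed as a blend of all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                     # 4 tokens, width 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)             # shape (4, 8)

The sequence attends to its own states. Primitive, as I said, but
mechanically real.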

But none of this seems reductive to me. YouTube literally cannot reduce me. My doctor cannot reduce me to my electronic health record. Etc. Such measures are focused aspects, not reductions. My friend doesn't have much of a footprint on the internet. So, because all she has is a hammer, everything looks like a nail. But my footprint is pretty large. I have content on Substack, WordPress, Usenet, mailing lists, yaddayadda, as well as YouTube, Spotify, Twitch, etc. And that's over and above things like my EHR(s), bank accounts, credit cards, incorporation records, IRS filings, etc., which I insist on accessing over the internet. There's plenty of data *there*. Getting at it all so that the bot could be trained might be persnickety. But it could be done if someone were sufficiently wealthy and motivated. And if they did that, I wouldn't be offended. I'd be awestruck. (I still might have to sue them, of course, in the interests of my family, friends, and colleagues.)

On 5/17/23 08:58, Prof David West wrote:
> My sympathies would be with your friend, until such time as a "clone exactly like her ... behavior, words, or even existence ..." was demonstrated.
> 
> "Exactly" is a big word! and I would add "completely."
> 
> Even on a single dimension, say use of language, the standard of exact and complete is hard to satisfy.
> I have no problem believing that a chat-bot could write an academic paper or either of my books; put together, and deliver in my voice, a lecture; play bar-trivia at the pub; or carry on a convincing conversation. I have no doubt that, in the very near future, the same bot might be able to project a video that included mannerisms and a simulation of the way I pace around a classroom.
> 
> But exactitude would require not only all the things I do do, and the idiosyncrasies in the way that I do them, but also the idiosyncrasies of my inabilities: I can never get the crossword clues involving popular culture, for example.
> 
> If a clone is built that "walks like a duck and quacks like a duck" but does not migrate or lay eggs, is it really a duck?
> 
> I would concede the equivalence issue of means or mechanisms behind the observable; e.g., it does not matter if the observed behavior results from electrons in gold wires or electrons in dendrites. But I would at least raise the question as to whether, in specific instances, a 'subjective' behind the behavior is or is not critical.
> 
> For example, and forgive the personal, you have mentioned being in pain all of your life. Would it be necessary for a bot to "feel pain" as you have in order to "act exactly like you?" Or is there an "algorithmic equivalent" possible for the bot to utilize in order to obtain unerring verisimilitude?
> 
> Then there is the whole question of experience in general. Would *I* really be *me*, sans the LSD trips over the years? If not, then how will the bot "calculate" for itself identical, or at least highly similar, experience equivalents?
> 
> Even if, in principle, it were possible to devise algorithms and programs that did result in behavior that mimicked Dave at every stage of its existence, will those algorithms be invented and programs written before the heat death of the universe? You cannot attempt to finesse this quest by invoking "self-learning" because then you need a training set that is at least as extensive as the 75-year training set that the mechanism you would have me be has utilized to become me.
> 
> I might agree that, in principle, "A bot that acts indistinguishably from how you act *is* you," but I think the word "indistinguishably" sets a bar that will never be attained.
> 
> davew
> 
> 
> 
> On Tue, May 16, 2023, at 6:46 PM, glen wrote:
>  > That's a great point. To be honest, anyone who is accurately mimicked by
>  > a bot should be just fine with that mimicry, leveraging the word
>  > "accurate", of course. I mean, isn't that a sci-fi plot? Your bot
>  > responds to things so that you don't have to.
>  >
>  > A friend of mine recently objected that "algorithms" are "reductive". I
>  > tried to argue that algorithms (in the modern sense of The Algorithm)
>  > can be either reductive or expansive (e.g. combinatorial explosion). But
>  > she was having none of it. I think her position boiled down to the idea
>  > that humans are complex, multi-faceted, deep creatures. And taking one
>  > or a few measurements and then claiming those represent a person in
>  > some space reduces the whole human to a low-dimensional vector.
>  >
>  > So, for her, I can imagine that even if she were cloned and her clone
>  > acted exactly like her, she would never accept that clone's behavior,
>  > words, or even existence as actually *being* her. There's some sense
>  > of agency, or an inner world, or whatever, such that accuracy becomes
>  > moot. It's the qualia that matter, the subjective sense of free
>  > will ... metaphysical nonsense.
>  >
>  > A bot that acts indistinguishably from how you act *is* you. I guess I'm
>  > dangerously close to claiming that GPT-4 and Bard actually are
>  > sentient/conscious. *8^O
>  >
>  > On 5/16/23 11:50, Marcus Daniels wrote:
>  >> I don’t really get it.  Trump can go on a TV town hall and lie, and
>  >> those folks just lap it up.   Sue a company for learning some fancy
>  >> patterns?  Really?  If someone made a generative model of, say, Glen’s
>  >> visual appearance and vocal mannerisms and gave him a shtick that didn’t
>  >> match up with his past remarks, I think I’d notice it right away.    If
>  >> a GPT-X could fake Eric Smith, I can safely take the blue pill.    Some
>  >> of our transactions will probably require more cryptographic signing.
>  >>    Fine, they probably should have already.
>  >>
>  >> *From:* Friam <friam-bounces at redfish.com> *On Behalf Of* Steve Smith
>  >> *Sent:* Tuesday, May 16, 2023 11:33 AM
>  >> *To:* friam at redfish.com
>  >> *Subject:* Re: [FRIAM] Bard and Don Quixote
>  >>
>  >> Jochen -
>  >>
>  >> Very interesting framing...  as a follow-up I took the converse
>  >> (inverse?) question to GPT4...
>  >>
>  >>     If we consider an LLM (Large Language Model) as the Sancho Panza to
>  >>     the Don Quixote of its human users, we can explore a couple of
>  >>     potential aspects:
>  >>
>  >>      1. Grounding and Practicality: Sancho Panza is known for his
>  >>         grounded perspective and practicality, which complements Don
>  >>         Quixote's idealism. Similarly, an LLM can provide factual
>  >>         information, context, and practical insights to guide and
>  >>         support human users in their interactions. It can offer a wealth
>  >>         of knowledge and help bridge gaps in understanding, serving as a
>  >>         pragmatic companion.
>  >>      2. Balancing Imagination and Realism: Don Quixote embodies
>  >>         imagination and creative thinking, while Sancho Panza offers a
>  >>         more realistic viewpoint. Similarly, an LLM can enhance human
>  >>         creativity by generating imaginative ideas and expanding
>  >>         possibilities. At the same time, it can bring rationality and
>  >>         critical thinking, ensuring that the generated content aligns
>  >>         with facts and reason.
>  >>      3. Providing Different Perspectives: Sancho Panza often offers
>  >>         alternative viewpoints and challenges Don Quixote's romanticized
>  >>         notions. Similarly, an LLM can present diverse perspectives,
>  >>         consider different angles, and engage in nuanced discussions
>  >>         with human users. It can help users explore varying opinions and
>  >>         broaden their understanding of complex topics.
>  >>      4. Ethical Guidance: Sancho Panza acts as a moral compass,
>  >>         advising Don Quixote on the right course of action. In a similar
>  >>         vein, an LLM can provide ethical guidance by highlighting
>  >>         potential biases, promoting inclusivity, and encouraging
>  >>         responsible use of language and information. It can contribute
>  >>         to creating a more informed and ethical digital environment.
>  >>
>  >>     However, it's important to note that an LLM, as a machine learning
>  >>     model, lacks human experiences, emotions, and deeper understanding.
>  >>     While it can offer valuable assistance, it may not fully grasp the
>  >>     complexities of human interactions or possess the wisdom and
>  >>     empathetic qualities of a human companion like Sancho Panza.
>  >>     Therefore, while an LLM can serve as a useful tool, it should be
>  >>     complemented by human judgment and critical thinking to navigate the
>  >>     nuances of real-life situations.
>  >>
>  >> I am a couple of hours behind on the live feed of the Senate hearing
>  >> on AI <https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence>,
>  >> listening in fits and starts between other things:
>  >>
>  >>  1. I was definitely impressed with Altman (OpenAI), Montgomery (IBM)
>  >>     and Marcus' (NYU) thoughtful and extrapolative responses rather than
>  >>     perhaps the more usual defensive/deflective/adversarial style that
>  >>     these hearings often have...   I don't know who chose this
>  >>     particular lineup but I thought both Montgomery and Marcus made a
>  >>     good complement to Altman.  If Google and Microsoft and ??? had been
>  >>     there it might have reflected more "competitive" or "showy" answers?
>  >>  2. I was impressed with the Senators (compared to my fairly low
>  >>     expectations).   Even Kennedy and Hawley managed not to do their
>  >>     usual grandstanding and snide sniping.   Altman handed Hawley's
>  >>     question "why don't we just let people sue you?" (as a mode of
>  >>     regulation/oversight) back to him quite deftly (in the style of "ass
>  >>     with both hands") by responding simply "I know of no reason people
>  >>     CAN'T sue us today if we cause harm".  Marcus chipped in pretty well
>  >>     outlining how the current laws that *might* apply are not well
>  >>     suited for many reasons.
>  >>  3. I felt that all three witnesses walked the fine line on the question
>  >>     of a "moratoriam" fairly deftly, acknowledging that they endorse the
>  >>     spirit of not running headlong and thoughtlessly into the future but
>  >>     at the same time there is no obvious practical way to implement and
>  >>     enforce this, but that they are all enforcing their own limits on
>  >>     how fast (and thoughtlessly?) they might roll out development to the
>  >>     public...
>  >>  4. In closing, Blumenthal suggested having ChatGPT rewrite the lyrics
>  >>     to "Don't Stop" (thinking about tomorrow; McVie/Fleetwood Mac),
>  >>     which I took to heart.  I was not impressed with its results and
>  >>     won't burden the list with them.  I'm guessing Blumenthal did *not*
>  >>     actually do that but, like Quixote, simply saw the windmill and
>  >>     thought it might be a giant?
>  >>
>  >> On 5/16/23 11:30 AM, Jochen Fromm wrote:
>  >>
>  >>     I have asked Bard (bard.google.com) today about Don Quixote by
>  >>     Cervantes, and whether a large language model would be similar to
>  >>     a Don Quixote without a Sancho Panza.
>  >>
>  >>     Here is what Bard replied:
>  >>
>  >>     "In a way, large language models can be seen as Don Quixotes without
>  >>     Sancho Panzas. They are trained on massive amounts of text data, and
>  >>     they are able to generate text, translate languages, write different
>  >>     kinds of creative content, and answer your questions in an
>  >>     informative way. However, they are not always able to distinguish
>  >>     between reality and fiction, and they can sometimes make mistakes or
>  >>     generate text that is not accurate or helpful.
>  >>
>  >>     Sancho Panza, on the other hand, is a realist who helps Don Quixote
>  >>     to stay grounded in reality. He provides Don Quixote with common
>  >>     sense and practical advice, and he helps him to avoid making mistakes.
>  >>
>  >>     Large language models could benefit from having a Sancho Panza
>  >>     figure to help them to stay grounded in reality and to avoid making
>  >>     mistakes. This could be done by having a human being or another
>  >>     large language model to review the text that the large language
>  >>     model generates. This would help to ensure that the text is
>  >>     accurate, helpful, and appropriate."
>  >>
>  >>     -J.


-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


