[FRIAM] Bard and Don Quixote

glen gepropella at gmail.com
Tue May 16 20:46:55 EDT 2023


That's a great point. To be honest, anyone who is accurately mimicked by 
a bot should be just fine with that mimicry, leveraging the word 
"accurate", of course. I mean, isn't that a sci-fi plot? Your bot 
responds to things so that you don't have to.

A friend of mine recently objected that "algorithms" are "reductive". I 
tried to argue that algorithms (in the modern sense of The Algorithm) 
can be either reductive or expansive (e.g. combinatorial explosion). But 
she was having none of it. I think her position boiled down to the idea 
that humans are complex, multi-faceted, deep creatures, and that taking 
one or a few measurements and then claiming they represent a person in 
some space reduces the whole human to a low-dimensional vector.
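To make her objection concrete, here's a toy sketch in Python. The 
embed() function and its 8 slots are made up for illustration, not any 
real system's API; it's just one crude way to project a person's words 
into a low-dim space.

    import hashlib

    def embed(text, dims=8):
        """Hash each word into one of `dims` buckets -- a crude stand-in
        for whatever projection The Algorithm actually applies."""
        vec = [0.0] * dims
        for word in text.lower().split():
            h = int(hashlib.md5(word.encode()).hexdigest(), 16)
            vec[h % dims] += 1.0
        total = sum(vec) or 1.0   # normalize so only proportions remain
        return [v / total for v in vec]

    # everything a person has ever written collapses to `dims` numbers
    print(embed("humans are complex, multi-faceted, deep creatures"))

Whether that collapse is "reductive" in her sense or just lossy 
compression is, I suppose, exactly the disagreement.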

So, for her, I can imagine that even if she were cloned and her clone 
acted exactly like her, she would never accept that clone's behavior, 
words, or even existence as actually *being* her. There's some sense of 
agency, or an inner world, or whatever, such that accuracy becomes 
moot. It's the qualia that matter, the subjective sense of free will 
... metaphysical nonsense.

A bot that acts indistinguishably from how you act *is* you. I guess I'm 
dangerously close to claiming that GPT-4 and Bard actually are 
sentient/conscious. *8^O

On 5/16/23 11:50, Marcus Daniels wrote:
> I don’t really get it.  Trump can go on a TV town hall and lie, and 
> those folks just lap it up.   Sue a company for learning some fancy 
> patterns?  Really?  If someone made a generative model of, say, Glen’s 
> visual appearance and vocal mannerisms and gave him a shtick that didn’t 
> match up with his past remarks, I think I’d notice it right away.    If 
> a GPT-X could fake Eric Smith, I can safely take the blue pill.    Some 
> of our transactions will probably require more cryptographic signing.  
>    Fine, they probably should have already.
> 
> *From:* Friam <friam-bounces at redfish.com> *On Behalf Of *Steve Smith
> *Sent:* Tuesday, May 16, 2023 11:33 AM
> *To:* friam at redfish.com
> *Subject:* Re: [FRIAM] Bard and Don Quixote
> 
> Jochen -
> 
> Very interesting framing... as a follow-up I took the converse 
> (inverse?) question to GPT4...
> 
>     /If we consider an LLM (Large Language Model) as the Sancho Panza to
>     the Don Quixote of its human users, we can explore a couple of
>     potential aspects:/
> 
>      1. /Grounding and Practicality: Sancho Panza is known for his
>         grounded perspective and practicality, which complements Don
>         Quixote's idealism. Similarly, an LLM can provide factual
>         information, context, and practical insights to guide and
>         support human users in their interactions. It can offer a wealth
>         of knowledge and help bridge gaps in understanding, serving as a
>         pragmatic companion./
>      2. /Balancing Imagination and Realism: Don Quixote embodies
>         imagination and creative thinking, while Sancho Panza offers a
>         more realistic viewpoint. Similarly, an LLM can enhance human
>         creativity by generating imaginative ideas and expanding
>         possibilities. At the same time, it can bring rationality and
>         critical thinking, ensuring that the generated content aligns
>         with facts and reason./
>      3. /Providing Different Perspectives: Sancho Panza often offers
>         alternative viewpoints and challenges Don Quixote's romanticized
>         notions. Similarly, an LLM can present diverse perspectives,
>         consider different angles, and engage in nuanced discussions
>         with human users. It can help users explore varying opinions and
>         broaden their understanding of complex topics./
>      4. /Ethical Guidance: Sancho Panza acts as a moral compass,
>         advising Don Quixote on the right course of action. In a similar
>         vein, an LLM can provide ethical guidance by highlighting
>         potential biases, promoting inclusivity, and encouraging
>         responsible use of language and information. It can contribute
>         to creating a more informed and ethical digital environment./
> 
>     /However, it's important to note that an LLM, as a machine learning
>     model, lacks human experiences, emotions, and deeper understanding.
>     While it can offer valuable assistance, it may not fully grasp the
>     complexities of human interactions or possess the wisdom and
>     empathetic qualities of a human companion like Sancho Panza.
>     Therefore, while an LLM can serve as a useful tool, it should be
>     complemented by human judgment and critical thinking to navigate the
>     nuances of real-life situations./
> 
> I am a couple of hours behind on the live feed of the Senate hearing on AI 
> <https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence>, listening in fits and starts between other things:
> 
>  1. I was definitely impressed with Altman (OpenAI), Montgomery (IBM)
>     and Marcus' (NYU) thoughtful and extrapolative responses rather than
>     perhaps the more usual defensive/deflective/adversarial style that
>     these hearings often have...   I don't know who chose this
>     particular lineup but I thought both Montgomery and Marcus made a
>     good complement to Altman.  If Google and Microsoft and ??? had been
>     there it might have reflected more "competitive" or "showy" answers?
>  2. I was impressed with the Senators (compared to my fairly low
>     expectations).   Even Kennedy and Hawley managed not to do their
>     usual grandstanding and snide sniping.   Altman handed Hawley's
>     question "why don't we just let people sue you?" (as a mode of
>     regulation/oversight) back to him quite deftly (in the style of "ass
>     with both hands") by responding simply "I know of no reason people
>     CAN'T sue us today if we cause harm".  Marcus chipped in pretty well
>     outlining how the current laws that *might* apply are not well
>     suited for many reasons.
>  3. I felt that all three witnesses walked the fine line on the question
>     of a "moratoriam" fairly deftly, acknowledging that they endorse the
>     spirit of not running headlong and thoughtlessly into the future but
>     at the same time there is no obvious practical way to implement and
>     enforce this, but that they are all enforcing their own limits on
>     how fast (and thoughtlessly?) they might roll out development to the
>     public...
>  4. In closing, Blumenthal suggested having ChatGPT rewrite the lyrics
>     to "Don't Stop" (thinking about tomorrow; McVie/Fleetwood Mac),
>     which I took to heart.  I was not impressed with its results and
>     won't burden the list with them.  I'm guessing Blumenthal did *not*
>     actually do that himself but, like Quixote, simply saw the windmill
>     and thought it might be a giant?
> 
> On 5/16/23 11:30 AM, Jochen Fromm wrote:
> 
>     I asked Bard (bard.google.com) today about Don Quixote by
>     Cervantes, and whether a large language model would be similar to a
>     Don Quixote without a Sancho Panza.
> 
>     Here is what Bard replied:
> 
>     "In a way, large language models can be seen as Don Quixotes without
>     Sancho Panzas. They are trained on massive amounts of text data, and
>     they are able to generate text, translate languages, write different
>     kinds of creative content, and answer your questions in an
>     informative way. However, they are not always able to distinguish
>     between reality and fiction, and they can sometimes make mistakes or
>     generate text that is not accurate or helpful.
> 
>     Sancho Panza, on the other hand, is a realist who helps Don Quixote
>     to stay grounded in reality. He provides Don Quixote with common
>     sense and practical advice, and he helps him to avoid making mistakes.
> 
>     Large language models could benefit from having a Sancho Panza
>     figure to help them to stay grounded in reality and to avoid making
>     mistakes. This could be done by having a human being or another
>     large language model to review the text that the large language
>     model generates. This would help to ensure that the text is
>     accurate, helpful, and appropriate"
> 
>     -J.
> 
