[FRIAM] Google Engineer Thinks AI Bot Has Become Sentient

glen gepropella at gmail.com
Mon Jun 13 18:26:16 EDT 2022


Hm. Now that you put it that way, how would laughter come through? It would be nearly trivial to add LoL or emoticons to the lexicon. My guess is we'd still see through it a bit. I predict the bot would use it like a polite Boomer. GenX and later get a bit wonky about when they use phrases like "lulz". And I can't even cap with the kids on Twitch these days. I have no idea what they're saying most of the time.

Surely, though, they've trained some large language models on texts containing onomatopoeia. Back in the late '80s, we'd use tokens in brackets like [grin], [ahem], or [pffft]. I still do, though my use has changed. Interjections like "Awesome!" are clearly there. It can't be much of a stretch to add mimicry of physiological-seeming things.
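
A sketch of what I mean, assuming the Hugging Face transformers API (GPT-2 is just a stand-in model, and the token list is ours from the old days):

# Sketch: grafting bracket tokens onto an existing model's lexicon.
# Assumes Hugging Face transformers; GPT-2 is a stand-in model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The bracket tokens from the late-'80s lexicon, as atomic vocabulary items.
new_tokens = ["[grin]", "[ahem]", "[pffft]", "lol"]
tokenizer.add_tokens(new_tokens)

# Grow the embedding matrix so each new token gets an (initially random) vector.
model.resize_token_embeddings(len(tokenizer))

The new rows in the embedding matrix start out random, of course, so you'd still have to fine-tune on text that actually uses the tokens before the bot could deploy a [pffft] convincingly. Which is why I suspect we'd still see through it.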

The sophistry of dialogs like Socrates' is another matter. Recognition of the absurd (riddles, paradox) would be an interesting thing to test. I just prompted my GPT-3 (primed with biomedical literature):

gepr: "Interleukin and posture walk into a bar ..."
['The bartender says "We don\'t serve your kind here."']
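
(If you want to reproduce that, the call was something like the following, assuming OpenAI's 2022-era completions API; "biomed-davinci" is a made-up name standing in for the primed model:)

# Hypothetical reconstruction, assuming OpenAI's 2022-era completions API.
# "biomed-davinci" stands in for a model primed on biomedical literature.
import openai

openai.api_key = "sk-..."  # redacted

response = openai.Completion.create(
    engine="biomed-davinci",
    prompt='Interleukin and posture walk into a bar ...',
    max_tokens=32,
    temperature=0.7,
)
print([choice.text.strip() for choice in response.choices])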

On 6/13/22 15:06, Jochen Fromm wrote:
> Yes, humor is important. Good point. Laughing is one of the things we do that apes do not. For me this is where it starts to get interesting: when we look at the things we do that apes do not, like language, culture, art, or writing systems. I mean, before the first civilizations appeared in Mesopotamia, ancient Greece, and ancient Egypt, there were just clans fighting each other to determine their place in the pecking order. Primates do this too.
> 
> We have this constant drive to resolve inconsistencies, which is related to confirmation bias. Every joke starts with an inconsistency that is resolved by an insight. Maybe we need just one basic mechanism to create a self-supervised agent that gets smarter bit by bit: artificial curiosity, i.e. a mechanism that seeks new or inconsistent information and rewards the resolution of inconsistencies. A bit like science itself.
> 
> -J.
> 
> 
> -------- Original message --------
> From: glen <gepropella at gmail.com>
> Date: 6/13/22 23:14 (GMT+01:00)
> To: friam at redfish.com
> Subject: Re: [FRIAM] Google Engineer Thinks AI Bot Has Become Sentient
> 
> IDK, I wouldn't say the dialog was indistinguishable from a human. When I ask people things like "Do you have feelings?", they respond pretty aggressively or defensively. While I agree that all the sentences were well-formed and sensible (SSI), they lacked the reflective quality of actual human responses. Plus, there wasn't any humor as far as I could tell. You'd expect that in a conversation with such ridiculous questions. That would be true even if, especially if, you were talking to a kid or a typical blue-collar sort. [⛧]
> 
> That's why it read, to me, like one of those fake dialogs intended to teach some lesson or other. And it wasn't even Socratic. This is where Aaronson's comment ("but can I run my own tests?") plays in. Meno or Euthyphro might *seem* indistinguishable from a human ... but they're not, they're fantastically designed to render the just-so condition the ideologue intends. Perhaps Lahontan's Kondiaronk was different?
> 
> 
> [⛧] I once picked up a hitchhiker on my way home from work back in TX. Since the ride was quite long, we talked quite a bit. As we were driving through town, I commented that most of the people looked, to me, like they were asleep ... metaphorically. The hitcher said, "They look awake to me", literally.
> 
> On 6/13/22 13:40, Jochen Fromm wrote:
>  > I think the capabilities of large language models are really impressive. The language of these models is not grounded, as this article says, but in principle it is possible to ground it.
>  > https://techmonitor.ai/technology/ai-and-automation/foundation-models-may-be-future-of-ai-theyre-also-deeply-flawed
>  >
>  > Take for example a robot, connect it to the Internet and a large language model, and add an additional OCR layer in between. The result? Probably creepy and uncanny, but if it worked we would most likely think such an actor was sentient. The replies in the LaMDA dialog transcript look indistinguishable from a human's.
>  > https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
>  >
>  > -J.
>  >
>  >
>  > -------- Original message --------
>  > From: glen <gepropella at gmail.com>
>  > Date: 6/13/22 17:14 (GMT+01:00)
>  > To: friam at redfish.com
>  > Subject: Re: [FRIAM] Google Engineer Thinks AI Bot Has Become Sentient
>  >
>  > "Remarkable" in the sense of "worthy of remark"? Yeah, maybe.
>  >
>  > LaMDA: Language Models for Dialog Applications
>  > https://arxiv.org/abs/2201.08239
>  >
>  > Personally, I think Lemoine's belief in LaMDA's sentience is an artifact of his religious belief. It's not exclusive to Christianity, though. One of the risks of the positions taken by those who believe in the reality of things like Jungian archetypes is false attribution. And it's not limited to anthropomorphic attribution. To the person with a hammer, everything looks like a nail. Even if such beliefs have some objective utility in some contexts, that utility is not likely to transfer to other contexts.
>  >
>  > I suppose this is why I'm more sympathetic to the (obviously still false in its extreme) behaviorist or skeptical position (cf. https://onlinelibrary.wiley.com/doi/abs/10.1111/phpr.12445). I.e., it's completely irrelevant whether or not you *claim* to have feelings and emotions. What's needed for knowledge (justified true belief) is a parallax pointing to the same conclusion, preferably including some largely objective angles.
>  >
>  > An objective angle on LaMDA might well be available from IIT operating over some (very large) log/trace data from the executing program. *That* plus the bot claiming it's sentient would give me pause.
>  >
>  > On 6/12/22 08:28, Jochen Fromm wrote:
>  >  > A Google engineer said he was placed on leave after claiming an AI chatbot was sentient. The fact that he thinks it is sentient is remarkable, isn't it?
>  >  > https://www.businessinsider.com/google-engineer-thinks-artificial-intelligence-bot-has-become-sentient-2022-6


-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


