[FRIAM] Google Engineer Thinks AI Bot Has Become Sentient

Jochen Fromm jofr at cas-group.net
Mon Jun 13 16:40:37 EDT 2022


I think the capabilities of large language models are really impressive. The language of these models is not grounded, as this article notes, but in principle it could be:
https://techmonitor.ai/technology/ai-and-automation/foundation-models-may-be-future-of-ai-theyre-also-deeply-flawed

Take a robot, connect it to the Internet and a large language model, and add an additional OCR layer in between. The result? Probably creepy and uncanny, but if it worked, we would most likely think such an actor was sentient. The replies in the LaMDA dialog transcript look indistinguishable from a human's.
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

-J.
-------- Original message --------
From: glen <gepropella at gmail.com>
Date: 6/13/22 17:14 (GMT+01:00)
To: friam at redfish.com
Subject: Re: [FRIAM] Google Engineer Thinks AI Bot Has Become Sentient

"Remarkable" in the sense of "worthy of remark"? Yeah, maybe.

LaMDA: Language Models for Dialog Applications
https://arxiv.org/abs/2201.08239

Personally, I think we can attribute Lemoine's belief in LaMDA's sentience to an artifact of his religious belief. It's not exclusive to Christianity, though. One of the risks of the positions taken by those who believe in the reality of things like Jungian archetypes is false attribution. And it's not limited to anthropomorphic attribution. To the person with a hammer, everything looks like a nail. Even if such beliefs have some objective utility in some contexts, that utility is not likely to be that transitive to other contexts.

I suppose this is why I'm more sympathetic to the (obviously still false in its extreme) behaviorist or skeptical position (cf. https://onlinelibrary.wiley.com/doi/abs/10.1111/phpr.12445). I.e., it's completely irrelevant whether or not you *claim* to have feelings and emotions. What's needed for knowledge (justified true belief) is a parallax pointing to the same conclusion, preferably including some largely objective angles.

An objective angle on LaMDA might well be available from IIT operating over some (very large) log/trace data from the executing program. *That* plus the bot claiming it's sentient would give me pause.

On 6/12/22 08:28, Jochen Fromm wrote:
> A Google engineer said he was placed on leave after claiming an AI chatbot was sentient. The fact that he thinks it would be sentient is remarkable, isn't it?
> https://www.businessinsider.com/google-engineer-thinks-artificial-intelligence-bot-has-become-sentient-2022-6

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .

FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/

