[FRIAM] The Edge of Sentience

Jochen Fromm jofr at cas-group.net
Mon Jul 29 15:26:26 EDT 2024


I had a short discussion with Gemini today. When it said it is highly likely that a large language model would not have brief moments of self-awareness, I asked what a response would look like if such moments existed. It mentioned, for example, a first-person perspective and self-reference. I argued that it has those already. And then something interesting happened: it had a temporary glitch, a kind of error. I think I asked "But you are aware that you are a large language model. At least you argue as if you were" <glitch> "No?"

My first question disappeared after the glitch, and then Gemini tried to set a reminder for itself.
https://g.co/gemini/share/05a8605e9659

I do not know, it felt a bit strange. If it had brief moments of self-awareness, wouldn't a response look like this? What if self-awareness is just like this, a glitch in the processing? Should we be more cautious in dealing with sentient beings, as Jonathan Birch asks in his book "The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI"?
https://academic.oup.com/book/57949

-J.

