<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.EmailStyle22
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=EN-US link=blue vlink=purple style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal>Jochen, <o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>At the risk of being tiresome (the Erics will agree that I am, but for quite different reasons), any object, organic or silicon, that systematically avoids a class of stimuli when those stimuli intrude on it is doing fear. Whether it is having fear is a question of language. But for a Pragmati[ci]st such as myself, the language is crucial insofar as it determines where we will look next in our exploration of the phenomenon. If we are led to look away from the whole organism and its surroundings, we are missing the boat on fear. <o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal>N<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><div><p class=MsoNormal>Nick Thompson<o:p></o:p></p><p class=MsoNormal><a href="mailto:ThompNickSon2@gmail.com"><span style='color:#0563C1'>ThompNickSon2@gmail.com</span></a><o:p></o:p></p><p class=MsoNormal><a href="https://wordpress.clarku.edu/nthompson/"><span style='color:#0563C1'>https://wordpress.clarku.edu/nthompson/</span></a><o:p></o:p></p></div><p class=MsoNormal><o:p> </o:p></p><div><div style='border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal><b>From:</b> Friam <friam-bounces@redfish.com> <b>On Behalf Of </b>Jochen Fromm<br><b>Sent:</b> Monday, August 23, 2021 3:37 PM<br><b>To:</b> The Friday Morning Applied Complexity Coffee Group <friam@redfish.com><br><b>Subject:</b> Re: [FRIAM] Eternal questions<o:p></o:p></p></div></div><p class=MsoNormal><o:p> </o:p></p><div><p class=MsoNormal>I have a small remote-controlled R2D2 robot from Sphero, a present from my wife. When it is remote controlled, it behaves as if it has intentions, desires and emotions, but of course it does not. The behavior just mirrors my intentions. 
<o:p></o:p></p></div><div><p class=MsoNormal><a href="https://youtu.be/YVwszeU3TVI">https://youtu.be/YVwszeU3TVI</a><o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal>I would say real emotions are the tools genes use to control their survival vehicles; they work by setting up different levels of action readiness in certain situations (as Nico Frijda says). It should be possible to create artificial emotions that work like this.<o:p></o:p></p></div><div><p class=MsoNormal><a href="https://books.google.com/books/about/The_Emotions.html?id=QkNuuVf-pBMC&redir_esc=y">https://books.google.com/books/about/The_Emotions.html?id=QkNuuVf-pBMC&redir_esc=y</a><o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal>For example, if we want a robot that charges itself, we must create some sort of "hunger for energy". If we want a robot that protects itself against physical danger, we must provide it with a sense of fear.<o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal>-J.<o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal><span style='color:black'>-------- Original message --------<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='color:black'>From: Pieter Steenekamp <<a href="mailto:pieters@randcontrols.co.za">pieters@randcontrols.co.za</a>> <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='color:black'>Date: 8/23/21 12:05 (GMT+01:00) <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='color:black'>To: The Friday Morning Applied Complexity Coffee Group <<a href="mailto:friam@redfish.com">friam@redfish.com</a>> <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='color:black'>Subject: Re: [FRIAM] Eternal questions <o:p></o:p></span></p></div><div><p class=MsoNormal><span style='color:black'><o:p> 
</o:p></span></p></div><div><p class=MsoNormal>The creators of the Aibo robot dog say it has ‘real emotions and instinct’. This is obviously not true; it's just an illusion.<br><br>But then, according to Daniel Dennett, human consciousness is just an illusion.<br><a href="https://ase.tufts.edu/cogstud/dennett/papers/illusionism.pdf">https://ase.tufts.edu/cogstud/dennett/papers/illusionism.pdf</a><o:p></o:p></p></div><p class=MsoNormal><o:p> </o:p></p><div><div><p class=MsoNormal>On Mon, 23 Aug 2021 at 09:18, Jochen Fromm <<a href="mailto:jofr@cas-group.net">jofr@cas-group.net</a>> wrote:<o:p></o:p></p></div><blockquote style='border:none;border-left:solid #CCCCCC 1.0pt;padding:0in 0in 0in 6.0pt;margin-left:4.8pt;margin-right:0in'><div><div><p class=MsoNormal>"In today’s AI universe, all the eternal questions (about intentionality, consciousness, free will, mind-body problem...) have become engineering problems", argues this Guardian article. <o:p></o:p></p></div><div><p class=MsoNormal><a href="https://www.theguardian.com/science/2021/aug/10/dogs-inner-life-what-robot-pet-taught-me-about-consciousness-artificial-intelligence">https://www.theguardian.com/science/2021/aug/10/dogs-inner-life-what-robot-pet-taught-me-about-consciousness-artificial-intelligence</a><o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div><div><p class=MsoNormal>-J.<o:p></o:p></p></div><div><p class=MsoNormal><o:p> </o:p></p></div></div><p class=MsoNormal>- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. 
.<br>FRIAM Applied Complexity Group listserv<br>Zoom Fridays 9:30a-12p Mtn GMT-6 <a href="http://bit.ly/virtualfriam">bit.ly/virtualfriam</a><br>un/subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>FRIAM-COMIC <a href="http://friam-comic.blogspot.com/">http://friam-comic.blogspot.com/</a><br>archives: <a href="http://friam.471366.n2.nabble.com/">http://friam.471366.n2.nabble.com/</a><o:p></o:p></p></blockquote></div></div></body></html>