<div dir="ltr"><div>A sciencey view of emotions and where they are located in the body:</div><div><a href="https://www.pnas.org/content/111/2/646">https://www.pnas.org/content/111/2/646</a></div><div><br></div><div>Search emotions map for visuals that show a ton of different emotions.</div><div><br></div><div> Curt<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 24, 2021 at 11:54 AM uǝlƃ ☤>$ <<a href="mailto:gepropella@gmail.com">gepropella@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I suppose my problem is that I *do* think we can find "the sadness" inside the brain ... well, not inside the brain, exactly, but inside the *body* ... well, not *inside* the body but peri-body, localized *at* the body, but extending out into the body's context a bit.<br>
<br>
Just like with one's thumb, sadness comprises a dynamic mesh of interlinked feedback loops. And that dynamic mesh of feedback is *part* of a larger mesh of such loops. (Re: "putting it in a robot" - cf types of attention: soft, hard, self, etc.) Some of those loops *look at* the sadness cluster, register that cluster as "other" in some sense ... just like how I can imagine chopping off my thumb and tossing that object into the woods.<br>
<br>
Because I do this all the time, it would blow my mind if others did not also do it. I particularly do it with fear. Wake up startled. Think maybe there's someone in the house. Grab the bat. As I'm walking up the stairs, I *reflect* on my registered, now objectified fear. And, in doing so, wiggle my way out of the grip of that fear and think more tactically about how to behave if there's a human-shaped shadow in the living room.<br>
<br>
I do the same thing with pain, particularly my chronic back pain, but also with things like stubbed toes. It's fscking hilarious how much that hurts ... ridiculously over-emphasized pain for what has happened. To not step outside the pain and laugh out loud would be weird. It's just way too funny how much that hurts.<br>
<br>
I welcome a lesson in how misguided I am, here.<br>
<br>
On 8/24/21 8:36 AM, Eric Charles wrote:<br>
> So.... This is JUST a question of whether we are having a casual conversation or a technical one, right? Certainly, in a casual, English-language conversation talk of "having" emotions is well understood, and just fine, for example "Nick is /having /a fit, just let him be." (I can't speak for other languages, but I assume there are many others where that would be true.) <br>
> <br>
> If we were, for some reason, having a technical conversation about how the /Science of Psychology/ should use technical language, then we /might/ also come to agree that isn't the best way to talk about it. <br>
> <br>
> In any case, the risk with "have" is that it reifies whatever we are talking about. To talk about someone /having /sadness leads naturally --- linguistically naturally --- in English --- to thinking that sadness is /a thing/ that I could find if I looked hard enough. It is why people used to think (and many, many still do) that if we just looked hard enough at someone's brain, we would find /the sadness/ inside there, somewhere. That is why it is dangerous in a technical conversation regarding psychology: that implication is wrong-headed in a way that repeatedly leads large swaths of the field down deep rabbit holes that they can't seem to get out of. <br>
> <br>
> On the one hand, I /have /a large ice mocha waiting for me in the fridge. On the other hand, this past summer I /had /a two-week-long trip to California. One is a straightforward object; the other was an extended activity I engaged in. When the robot-designers assert that their robot "has" emotions, which do they mean? Honestly, I think they don't mean either one. It is a marketing tool, and not part of a conversation at all. As such, it doesn't really fit into the dichotomy above, and is trying to play one off of the other. They are using the terms "emotions and instincts" to mean something even less than whatever Tesla means when they say they have an autodrive that for sure still isn't good enough to autodrive.<br>
> <br>
> What the robot-makers mean is simply to indicate that the robot will be a bit more responsive to certain things than other models on the market, and /hopefully /that's what most consumers understand it to mean. But not all will... at least some of the people being exposed to the marketing will take it to mean that emotion has been successfully put somewhere inside the robot. (The latter is a straightforward empirical claim, and if you think I'm wrong about that, you have way too much faith in how savvy 100% of people are.) As such, the marketing should be annoying to anti-dualist psychologists, who see it buttressing /at least some/ people's tendency to jump down that rabbit hole mentioned above.<br>
<br>
-- <br>
☤>$ uǝlƃ<br>
<br>
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .<br>
FRIAM Applied Complexity Group listserv<br>
Zoom Fridays 9:30a-12p Mtn GMT-6 <a href="http://bit.ly/virtualfriam" rel="noreferrer" target="_blank">bit.ly/virtualfriam</a><br>
un/subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com" rel="noreferrer" target="_blank">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>
FRIAM-COMIC <a href="http://friam-comic.blogspot.com/" rel="noreferrer" target="_blank">http://friam-comic.blogspot.com/</a><br>
archives: <a href="http://friam.471366.n2.nabble.com/" rel="noreferrer" target="_blank">http://friam.471366.n2.nabble.com/</a><br>
</blockquote></div>