<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<blockquote type="cite"
cite="mid:8fa7bab3-7fde-be0d-503f-d91453767247@ropella.name">
<pre class="moz-quote-pre" wrap="">Perhaps coincidentally ... or maybe cause I'm triggered ...:
Do Atoms Ever Touch?
<a class="moz-txt-link-freetext" href="https://youtu.be/P0TNJrTlbBQ">https://youtu.be/P0TNJrTlbBQ</a>
They go 'round and 'round about the definitions and *finally* arrive at the conclusion that "the analogy breaks down". So, the answer to "are emotional states hidden" is "no", but not because they are or are not hidden (by whatever definition of "hidden" you may choose), but because the question is NONSENSE! >8^) So, for all you people who think metaphor is so fundamental ... does this confirm or contradict your bias?
</pre>
</blockquote>
<p>Metaphorist trolling much &lt;grin&gt;?<br>
</p>
<p>It confirms *my* bias, but I think your conception of "metaphor
    as fundamental" might be different from my own. I may also be
    guilty of "moving the goalposts". I've been studying Category
    Theory, looking for formalisms suitable for a sort of universal
    abstraction of the structure mappings between different formal
    structures (e.g. sets, topological spaces, vector spaces, posets,
    manifolds) and the kinds of structures found in complex (layered)
    metaphors. <br>
</p>
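<p>As a concrete (if toy) illustration of what I mean by a
    "structure mapping": in CT the basic gadget is a functor, which
    carries objects to objects and maps to maps while preserving
    composition and identities. Below is a minimal Python sketch, with
    names of my own invention; the claim is only that metaphors look
    functor-like, not that this is what the papers cited further down
    do.</p>
<pre>
# A functor in miniature: the "list" construction F sends each type X
# to lists-of-X, and each function f: X -> Y to F(f): list[X] -> list[Y].
# On this view a metaphor is a similar structure-preserving mapping
# from a source domain to a target domain.

def F_map(f):                 # morphism part: f |-> "apply f elementwise"
    return lambda xs: [f(x) for x in xs]

def compose(g, f):            # ordinary function composition, g after f
    return lambda x: g(f(x))

double = lambda n: n * 2
shout = lambda n: str(n) + "!"
xs = [1, 2, 3]

# Functor law 1: F preserves composition, F(g . f) == F(g) . F(f)
assert F_map(compose(shout, double))(xs) == F_map(shout)(F_map(double)(xs))

# Functor law 2: F preserves identities, F(id) == id
ident = lambda x: x
assert F_map(ident)(xs) == xs

print(F_map(compose(shout, double))(xs))   # ['2!', '4!', '6!']
</pre>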
<p>Just like "scientific theories" or "models" or "maps",
*metaphors are always wrong*, some alternately more/less
useful/wrong than others. The relation "apt" comes to mind.</p>
<p>I would NOT claim that reality is structured by
    metaphors/analogies/ontologies/models/theories, but rather that
    our *language* and formal understanding are structured in that
    way. <br>
</p>
<p>This may also tie into the parallel thread which Nick so aptly
    dubbed "experience beyond experience". To the extent that we have
    no way to structure what we "know" by tying it to what we have
    "sensed", do we have "experience beyond experience"?</p>
<p>It may also tie into deep machine learning. IMO machine
    learning excels at predictive power while being virtually devoid
    of explanatory power. The holy grail of neural nets/machine
    learning/learning classifier systems is to analyze the artifact
    resulting from training well enough to recover explanatory
    power. <br>
</p>
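<p>To make that distinction concrete, here is a minimal sketch
    (assuming scikit-learn is available; the dataset, model, and probe
    are my own illustrative choices, not anyone's canonical method):
    the trained "artifact" predicts well, but inspecting it directly
    yields only weight matrices, so we fall back on post-hoc probes of
    input/output behavior like permutation importance.</p>
<pre>
# Predictive power without explanatory power, in miniature.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# A toy classification problem with a few informative features.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

# Prediction: easy to get, easy to score.
print("test accuracy:", clf.score(X_test, y_test))

# Explanation: the raw artifact is just arrays of weights...
print("first-layer weight matrix shape:", clf.coefs_[0].shape)

# ...so we probe the trained model's behavior after the fact instead.
result = permutation_importance(clf, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print("feature", i, "importance %+.3f" % imp)
</pre>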
<p>FWIW, a few of the resources I've found that provide interesting
(far from conclusive) insight into all of this are:</p>
<ol>
<li>Big picture, suggesting how/why CT might inform or relate to
        MT - <a
          href="https://www.math3ma.com/blog/what-is-category-theory-anyway">https://www.math3ma.com/blog/what-is-category-theory-anyway</a></li>
<li>Not very deep or even convincing but broadly referential to
the topic - <a
href="http://bohemiantheory.blogspot.com/2014/03/metaphorically-category-theory-is-study.html">http://bohemiantheory.blogspot.com/2014/03/metaphorically-category-theory-is-study.html</a></li>
<li>Meaning, Metaphors, and Morphisms: Theory of Intermediate
Natural Transformations - <a
href="https://arxiv.org/pdf/1801.10542.pdf">https://arxiv.org/pdf/1801.10542.pdf</a></li>
</ol>
<p>I don't know that anyone else here is motivated to dig into any
    of these, but some parallax would be helpful. The last one is the
    most formal/thorough/promising, but it has the added challenge of
    being a (rough) translation from Japanese.<br>
</p>
<p>- Steve<br>
</p>
</body>
</html>