[FRIAM] the role of metaphor in scientific thought

Steven A Smith sasmyth at swcp.com
Fri Jun 23 14:35:55 EDT 2017


Marcus -

This is about the time when I expect Dave West to jump in with his rant 
about how broken the metaphor of "mind as computer" (or perhaps the Venn 
diagram) is.  Though he may not be cross-subscribed here.


Ignoring those arguments for a moment and giving over to the metaphor, 
let me offer this observation:

To the extent that the only and precise goal is to efficiently, 
unambiguously, and accurately serialize the contents of one's mind and 
transmit it to another mind, which de-serializes it with the goal of 
synchronizing the internal state of Bob's mind to that of Alice's, 
perhaps what you say is spot on.   A technical manual or a scientific 
paper might very well call for that level of precision.
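
Purely to make the metaphor concrete (a toy sketch of my own, in Python; 
the schema and field names are invented for illustration, not anything 
Marcus proposed), the "precise" case is the one where Alice and Bob agree 
on the schema up front, so the round trip is lossless:

    import json

    # A shared "ontology": both parties agree on exactly these fields.
    SHARED_SCHEMA = {"claim", "evidence", "confidence"}

    def serialize(thought):
        # Alice flattens her internal state into an agreed-upon wire format.
        assert set(thought) == SHARED_SCHEMA
        return json.dumps(thought)

    def deserialize(message):
        # Bob reconstructs that state; nothing is lost because the schema is shared.
        thought = json.loads(message)
        assert set(thought) == SHARED_SCHEMA
        return thought

    alice = {"claim": "metaphor matters", "evidence": "this thread", "confidence": 0.7}
    assert deserialize(serialize(alice)) == alice   # lossless round trip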

From Other Tongues 
<https://www.finishinglinepress.com/product/from-other-tongues-by-mary-strong-jackson/> 
- Mary Strong Jackson

It just so happens, I am reading this poetry collection just now...

I think this all "begs the question" <grin> that Owen brought up about 
shared ontologies.    If Bob and Alice have a *precisely* shared 
ontology (and therefore lexicon?) then this is quite tractable.   If 
they do NOT share an ontology (much less a lexicon) then there is likely 
(surely?) to be a mis-registration (if I'm using Glen's term correctly) 
in any such serialization/deserialization.    One might suggest that 
developing/obtaining a perfectly shared ontology is the primary goal of 
communication (coming to a common understanding?), and I think that is a 
significant part of the reason we commun(icat)e...   but I would claim 
there is also a *creative* aspect of communication, which is to explore 
the differences between our ontologies and look for interpolations 
between and extrapolations *beyond* them which *might* yield a larger, 
richer, more expressive and apprehensive ontology for *understanding the 
world as it is*.   Science is a very elaborate and formal system for 
pursuing the more observable phenomena of the world, and I don't argue 
that in the phase of science where we might be buttoning down a 
well-explored concept/phenomenon, precision, accuracy, and lack of 
ambiguity are crucial.   Thus the reserved lexicons of every scientific 
(sub)discipline.   But what explains the Tower of Babel that is Science 
as it is practiced?  Is it merely sloppy thinking and language that 
causes each subfield to (mis?)use terms from other (sub)fields?  Or is 
there something more afoot?
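
(Again a toy sketch of my own, not anyone's actual model: mis-registration, 
as I understand Glen's term, is roughly what you get when each party decodes 
the same word against a different private lexicon.)

    # Each agent's private lexicon maps words to that agent's own concepts.
    ALICE_LEXICON = {"closure": "a function bundled with its captured environment"}
    BOB_LEXICON   = {"closure": "emotional resolution after a difficult event"}

    def interpret(word, lexicon):
        # Decoding happens against the hearer's ontology, not the speaker's.
        return lexicon.get(word, "<no concept for this word>")

    sent = "closure"
    meant = interpret(sent, ALICE_LEXICON)   # what Alice intended
    heard = interpret(sent, BOB_LEXICON)     # what Bob reconstructed
    assert meant != heard                    # the mis-registration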

I would contend that this is one of the things that divides Science from 
Engineering.  Engineering is generally interested in highly 
reproducible results, while, in a paradoxical sense, Science is often 
more interested in the anomalous results?

That aside, my good friend and colleague Tom Caudell (UNM) has been 
working on a book with Mike Healy (UW) for what seems like decades now, 
building up a theory (and surrounding arguments) for a Neuronal Model 
of Mind which is ultimately grounded in category theory and informed by 
neural net theory.   I am likely mis-describing this effort, but I think 
I've captured the gist.

https://www.researchgate.net/publication/241246640_Ontologies_and_Worlds_in_Category_Theory_Implications_for_Neural_Systems

In any case, I think that their level of formality is useful, but may 
miss the true nature of consciousness and, importantly, creativity.

Just SAyin,

  - Steve


On 6/23/17 10:39 AM, Marcus Daniels wrote:
> < Your comparison of "closure" to Nick's idea of "surplus"
> (intentional or not) meaning.  I accept that in programming a computer,
> "closure" is a useful tool, to avoid unintended "side effects".>
>
> If one thinks of the mind of two people as two circles in a Venn diagram and the intersection as their communication, meaning is still in reference to each complete circle; it is subjective.  This may often lead to ambiguity and contradiction, but doesn't mean that language itself should be inherently ambiguous.   Specifically, a closure would imply that while each agent was bringing to bear their experience on the interpretation of the communication,  to the extent their mind is in flux from that communication, in a functional programming approach it would be modeled as transactions within each agent.   It's simply a question of being precise about what is going on.
>
> Marcus
>
