[FRIAM] WAS: ∄ meaning, only text: IS: Simulations as constructed metaphors

thompnickson2 at gmail.com
Mon Aug 10 14:20:23 EDT 2020


Dear Frenemies,

I have had my ears boxed so often for dragging threads into my metaphor den that I thought I ought to rethread this.  But the paper Glen posted and Russ applauded is really interesting, describing the manner in which implicit assumptions built into our AI can lead it wildly astray: “There’s more than one way to [see] a cat.”

The article had an additional lesson for me.  To the extent that you folks will permit me to think of simulations as contrived metaphors, as opposed to natural metaphors – i.e., objects built solely for the purpose of being metaphors, as opposed to objects found in the world and appropriated for that purpose – I am reminded of a book by Evelyn Fox Keller which argues that a model (i.e., a scientific metaphor) can only be useful if it is more easily understood than the thing it models.  Don’t use chimpanzees as models if you are interested in mice.

Simulations would seem to me to have the same obligation.  If you write a simulation of a process that you don’t understand any better than the thing you are simulating, then you have gotten nowhere, right?  So if you are publishing papers in which you investigate what your AI is doing, has not the contrivance process gone astray?

What further interested me about these models that the AI provided was that they were in part natural and in part contrived.  The contrived part is where the investigators mimicked the hierarchical construction of the visual system in setting up the AI; the natural part is the focus on texture that emerged in the resulting simulation.  So, in the end, the metaphor generated by the AI turned out to be a bad one – heuristic, perhaps, but not apt.
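
For concreteness, here is a minimal sketch of what “mimicking the hierarchical construction of the visual system” can amount to in code – a small PyTorch convolutional stack, written for illustration only.  The layer comments and the cat/not-cat task are my own invention, not the architecture from the paper Glen posted:

import torch.nn as nn

# A contrived hierarchy, loosely patterned on V1 -> V2 -> V4 -> IT:
# each stage pools over a larger patch of the image than the last.
# Illustrative only; not the network from the cited article.
cat_detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # local contrast ("edges")
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # patterns of edges ("textures")
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # patterns of textures ("shapes")
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),                             # cat / not-cat
)
# Nothing in this contrived part dictates a preference for texture over
# shape; that preference emerges only after training on data – the
# "natural", found part of the metaphor.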

Nicholas Thompson

Emeritus Professor of Ethology and Psychology

Clark University

ThompNickSon2 at gmail.com

https://wordpress.clarku.edu/nthompson/

From: Friam <friam-bounces at redfish.com> On Behalf Of Russ Abbott
Sent: Monday, August 10, 2020 11:04 AM
To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
Subject: Re: [FRIAM] ∄ meaning, only text

Independent of Kavanaugh, that was a great article. That's the first I have heard of this work. It begins to explain a lot about deep learning and its literal and figurative superficiality.

-- Russ Abbott
Professor, Computer Science
California State University, Los Angeles


On Mon, Aug 10, 2020 at 7:02 AM uǝlƃ ↙↙↙ <gepropella at gmail.com> wrote:

And to round out another thread, wherein I proposed Brett Kavanaugh *is* Artificial Intelligence, this article pops up:

  Where We See Shapes, AI Sees Textures
  Jordana Cepelewicz
  https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/

In the context of "originalism" and reading *through* the text, the question is: Why does Brett *seem* intelligent [‽] in a different way than your average zero-shot AI? I like Nick's argument that meaning is higher-order pattern. The results Cepelewicz cites validate that argument [⸘]. But if we continue, we'll fall back into the argument about high-order Markovity, free will, and steganographic [de]coding. And (worse) it dovetails with No Free Lunch and whether strict potentialists are well-justified in using higher order operators. Multi-objective constraint solving (aka parallax) seems to cut a compromise through the whole meta-thread. But, as always, the tricks lie in composition and modularity. How do the constraints compose? Which problems can be teased apart from which other problems to create cliques in the graph or even repurposable anatomical modules? How do we construct structured memory for saving snapshots of swapped out partial solutions? Etc.
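
(A toy gloss on the multi-objective point, for concreteness: two invented cost functions that cannot both be minimized at once, and the set of undominated compromises between them.  The functions and names below are made up for illustration, not anything from the article or the thread:)

def fidelity_cost(x):    # how badly a candidate fits the data
    return (x - 3) ** 2

def simplicity_cost(x):  # how complicated a candidate is
    return x

candidates = range(7)

def dominated(a, b):
    # True if b is at least as good as a on both objectives and
    # strictly better on at least one.
    fa, sa = fidelity_cost(a), simplicity_cost(a)
    fb, sb = fidelity_cost(b), simplicity_cost(b)
    return fb <= fa and sb <= sa and (fb < fa or sb < sa)

pareto = [a for a in candidates
          if not any(dominated(a, b) for b in candidates if b != a)]
print(pareto)  # the undominated compromises: [0, 1, 2, 3]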


[‽] If you can't tell, I'm really enjoying using a frat boy political operative who *pretends* to be a SCOTUS justice in the argument for strong AI. To use an actual justice like Gorsuch as such just isn't satisfying.

[⸘] Of course, we don't learn from confirmation. We only learn from critical objection. And the 2nd half of the article does that well enough, I think.

-- 
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  http://bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 
