[FRIAM] Free Will in the Atlantic

Marcus Daniels marcus at snoutfarm.com
Mon Apr 5 11:43:55 EDT 2021


Glen writes:

"Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."

I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.

Marcus

-----Original Message-----
From: Friam <friam-bounces at redfish.com> On Behalf Of glen
Sent: Monday, April 5, 2021 7:13 AM
To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
Subject: Re: [FRIAM] Free Will in the Atlantic

On the Turing Completeness of Modern Neural Network Architectures
https://arxiv.org/abs/1901.03429

I'm going to try to fold the 2 sub-threads of scrutability and metaphysics together. It seems like universal computation is a relatively noncontroversial metaphysical commitment. At first, given the above paper on demonstrating the universality of the Transformer under bounded memory, I intended to post that there's nothing inscrutable about the architecture of self-attention loops. If by "inscrutable", we mean *invulnerable* to scrutiny, then architectures like GPT3 are certainly not inscrutable. But we have to acknowledge that explainable/interpretable AI is a serious domain. 

Anyway, the metaphysical commitments seep in at Church-Turing, I think. It's easy to lob accusations at, say, Roger Penrose for making a speculative argument that humans may be able to do things computers can't do. But I see both sides as making *useful* metaphysical commitments. One side has faith that our current formal systems will eventually reason over biological structures like the brain as *well* as they can reason over artifacts like the Transformer. The other side has faith that biological structures lie outside the formal systems we currently have available.

The important thing is to see the 2 as working on the same problem, the instantiation of formal systems that can (or can't) be shown to do the same work as the things we see in the world. A corollary is that those of us who skip to their faithful end and don't do the work (or show their work) it takes to get there are *not* working on the same problem. Progress doesn't require agnosticism. But those who lob their faith-based claims over the wall and wash their hands as if the work's all been done are either merely distractions or debilitating lesions that need to be scraped away so healthy tissue can replace them.

Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult. And to write it off one way or the other isn't helpful. I don't claim to understand the paper [⛧]. But to me, in my ignorant skimming, the most interesting part of Pérez et al is the requirement for hard over soft attention. Their argument about the irrationality of various soft attention functions is straightforward ... I suppose ... to people smarter than me. But using hard attention implies something like selection, choice, scoping, distinction, ... differences in kind/category ... even if it may be done in a covering/enumerative way. That "choosing" reminds me of the axiom of choice and the law of the excluded middle, which are crucial distinctions for the formal systems one might use to model thought. It also rings a little bell in my head about the specialness of biology. Our "averaging" methods work in an 80/20 way in medicine. But as far as those methods have taken us, we still don't have solid theories for precision medicine. And these persnickety little constructs in the formal system (which may or may not have analogs in the - ultimate - referent system) are deep in the weeds compared to glossing concepts like Bayesianism.
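The hard/soft distinction can be made concrete. A minimal sketch in plain Python (hypothetical helper names, not the construction from Pérez et al.): soft attention returns a differentiable weighted average over all positions, while hard attention makes a discrete selection, the "choosing" described above.

```python
import math

def softmax(scores):
    # Soft attention weights: every position gets nonzero mass,
    # and the result is a smooth function of the scores.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def soft_attention(scores, values):
    # Blend all values by their softmax weights (a weighted average).
    weights = softmax(scores)
    return sum(w * v for w, v in zip(weights, values))

def hard_attention(scores, values):
    # Select the single highest-scoring position: a discrete choice,
    # a difference in kind rather than a matter of degree.
    best = max(range(len(scores)), key=lambda i: scores[i])
    return values[best]

scores = [0.1, 2.0, 0.3]
values = [10.0, 20.0, 30.0]
print(hard_attention(scores, values))  # picks values[1]: 20.0
print(soft_attention(scores, values))  # a blend strictly between 10.0 and 30.0
```

Note that `hard_attention` is not differentiable in the scores, which is one reason trained systems overwhelmingly use the soft form, even though the universality argument leans on the hard one.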


[⛧] And I can't figure out if it's been published or peer reviewed anywhere. I don't think so, which means it could be flawed. But it's less important to me whether their argument is ultimately valid than it is to trace the character, style, tools of the argument.


On April 2, 2021 2:51:08 PM PDT, Marcus Daniels <marcus at snoutfarm.com> wrote:
>Are there experiments one could conduct to say whether a metaphysics 
>was plausible or not?  If nothing is falsifiable, then we are again in 
>the realm of faith.
>If one starts out selecting a metaphysics to justify some action or 
>belief, this is also not helpful to clear communication or analysis.
>We can select rules of the game that are the least controversial with
>the most empirical evidence supporting them.   This is not a failure of
>imagination, this is fair play.
>
>From: Friam <friam-bounces at redfish.com> On Behalf Of Pieter Steenekamp
>Sent: Friday, April 2, 2021 1:25 PM
>To: The Friday Morning Applied Complexity Coffee Group 
><friam at redfish.com>
>Subject: Re: [FRIAM] Free Will in the Atlantic
>
>I agree fully. If something is inscrutable it might exhibit free will.
>But what happens in our brains is certainly scrutable. Maybe not yet 
>with current technology, but how can it be inscrutable in principle? In 
>principle we know that neurons are firing and communicate with other 
>neurons using synapses. Just look how far deep learning has come. Okay, 
>not yet compared to the human brain, but progress is made almost by the 
>day. Like the example I mentioned above: AlphaGo, which came up with 
>creative moves that stunned all Go experts. My point is that deep 
>learning was inspired by the structure of the brain and is showing 
>behavior similar to the brain's. David Deutsch argues in The Beginning 
>of Infinity that science makes progress through good explanations.
>The explanation that the brain is scrutable meets Deutsch's criteria 
>for a good explanation. What's the alternative? That there is some sort 
>of ghost giving us free will? No, that's not a good explanation.
>
>On Fri, 2 Apr 2021 at 21:53, Marcus Daniels 
><marcus at snoutfarm.com<mailto:marcus at snoutfarm.com>> wrote:
>In what acceptable scenario is the behavior not describable in
>principle?  The only scenario that comes to mind is magical,
>non-scientific thinking.
>I doubt that Tesla navigation systems are written in a purely 
>functional language, but surely there is more to this condition than 
>whether I have access to that source code and can send you the million 
>lines in purely functional form?  If something is inscrutable, it might 
>exhibit free will?
>
>-----Original Message-----
>From: Friam
><friam-bounces at redfish.com<mailto:friam-bounces at redfish.com>> On Behalf 
>Of jon zingale
>Sent: Friday, April 2, 2021 12:26 PM
>To: friam at redfish.com<mailto:friam at redfish.com>
>Subject: Re: [FRIAM] Free Will in the Atlantic
>
>I would say no if you can provide me the function.
>
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/


More information about the Friam mailing list