[FRIAM] Free Will in the Atlantic

Steve Smith sasmyth at swcp.com
Mon Apr 5 12:42:29 EDT 2021


Glen wrote:
> Until we can measure the analog (robot/computer) in the same way we can measure the referent (people), e.g. by asking them whether they feel they have free will, we'll be comparing apples to oranges.

And I believe this folds us back into the discussion about Dancing
Robots.  We might do motion studies on the Dancing Robots and discover
that they are *better* dancers, in the sense of more faithfully following
the (implied or explicit) choreography than humans would/might/could,
and in fact *that* would suggest a *lack* of Free Will.  In that sense
robots have *too much* rhythm (or more precisely, their rhythm is too
precise?).  Of course, clever programmers can then add back in some
noise to their precision, and even build a model of the variations in
human dance moves and *induce* that level of variation in the robot's
moves.  We might even add a model of syncopation to make it algorithmic
rather than statistical.  At some point, diminishing returns on
"careful scrutiny" cause us to give them a pass on a "Turing Test" for
robot dancing.
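
Just to make the "add the noise back in" idea concrete, here is a
minimal Python sketch (the function name, parameters, and numbers are
all hypothetical illustrations, not anything from an actual robot-dance
system): the Gaussian jitter plays the role of the statistical model of
human timing variation, and the occasional deliberate off-beat shift
stands in for the algorithmic model of syncopation.

    import random

    def humanize_beats(beat_times, jitter_sd=0.02, syncopation_prob=0.1,
                       syncopation_shift=0.125):
        """Perturb a robot's nominal beat times so they look less
        machine-perfect.  All names and defaults here are made up for
        illustration.

        beat_times        -- nominal onset times (seconds) from the choreography
        jitter_sd         -- std. dev. of Gaussian timing noise (statistical model)
        syncopation_prob  -- chance a hit is deliberately displaced (algorithmic model)
        syncopation_shift -- fraction of a beat to displace a syncopated hit
        """
        humanized = []
        for i, t in enumerate(beat_times):
            # statistical part: small Gaussian wobble around the nominal time
            t_new = t + random.gauss(0.0, jitter_sd)
            # algorithmic part: occasionally push the hit off the beat on purpose
            if i + 1 < len(beat_times) and random.random() < syncopation_prob:
                beat_len = beat_times[i + 1] - t
                t_new += syncopation_shift * beat_len
            humanized.append(t_new)
        return humanized

    # one bar of quarter notes at 120 bpm (0.5 s per beat)
    print(humanize_beats([0.0, 0.5, 1.0, 1.5]))

Dial jitter_sd and syncopation_prob up until the motion studies can no
longer tell the robot's timing from a human's, and you have your pass
on the robot-dance "Turing Test."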

Measured over an ensemble of kids on American Bandstand dancing to
"Twist and Shout!" in the 1960s, we might have to add not just a model
of syncopation but a model of the emotional states *driving* that
syncopation, including the particular existential angst experienced by
teens growing up in that post-war Boom/Duck-n-Cover era.  And would it
be complete without including models of children of Holocaust Survivors
and crypto-Nazis living in the US?  Recursion ad nauseam, ad infinitum.

I don't see how adding quantum superposition and wave-function collapse
makes any of this easier.  <snark>Maybe we could ask a Penrose
Chatterbot?</snark>

- Steve

>
>
> On 4/5/21 8:43 AM, Marcus Daniels wrote:
>> Glen writes:
>>
>> "Instantiating artifacts that exhibit the markers for an interoceptive sense of agency ("free will") is obviously difficult."
>>
>> I don't see how agency itself is particularly hard.  Some executive process needs to model and predict an environment, and it needs to interact with that environment.   Is it hard in a way different from making an adaptable robot?   Waymo, Tesla, and others are doing this.
