[FRIAM] alternative response

Jon Zingale jonzingale at gmail.com
Wed Jun 17 14:47:41 EDT 2020


Gary,

Perhaps what is more interesting about Einstein's *brain* is that it may
have been able to die not *believing* in the theory it developed. This is
only possible if we can accept ontological grounds other than habit.

Glen,

You said: "Maybe this is an insight Nick is asking for?"

Nick *should* understand this. He demands that *we* make up a council
and that it might be *our jobs* to redirect SteveG into using a different
word than god. That it might save him the trouble of playing another's
game, of realizing the inertia of legacy code.

Nick, et al.

Glen says: "I highlight 'as if' above because it's that truncation error
that might be overlooked. However, just because I think these interface
mismatches cause the overwhelming MAJORITY of what we might call 'free
will', it's not necessarily the case that there is no freedom somewhere
deep down. Maybe 'below Fermi', there is a tiny bit of wiggle room that
then *cascades* (purely reactively) through the system." - From page 3

Nick says: "To deny it is to deny our humanity. Well, I deny it.  In the
first place, I don't find humans to be all that special...I still have
yet to understand Glen's idea of free will..."

Ok, you make an unwavering ontological commitment: *all is effable*.
Glen begins by imagining that we limit the scope to allow for the ineffable.
You act as if you are probing his thesis on his terms, but then you
home in on the ineffability property entailed by that scoping,
only to claim that you don't understand. I judge this as boring.

Nick says: " But I think the game metaphor fails just because actual
games can take on only those implications that we care to give them
whereas 'games' like the 'free will game' have negative consequences
both in the field of psychology and in our every day lives."

Next, you make an appeal to *flesh in the game* by becoming a staunch
defender of the universal application of our game to its consequences
with respect to moral(?) responsibility (negative consequences). Now,
I feel that your requiring responsibility for the consequences of
universal applicability is itself an ontological commitment. This reminds
me of when a band attempts to work out their royalty structure and where
they will stand on stage, ad infinitum, before they bother to write
a song. Have no fear, of course, you will look good in sunglasses.

Further down on page 3, Glen asserts:
"And when we use the phrase 'free will' in our everyday conversation,
we're really talking about that loss, the information lost when we
truncate others or others truncate us. The existence of the lossy,
truncating collective doesn't preclude the existence of the tiny, tiny
impact randomness."

This seems like an ontological commitment. On page 4 he continues to
fend off attempts to move the grounding by emphasizing scope and the
conversation's origin. Glen says:

"I agree. I doubt it would display free will, too. But it's an
interesting question whether it would or not. It's an even more
interesting question whether it would *look* like it exhibited free
will, which is the question RussA asked."

This opens the floor to constructing models of what it could mean to
*look like* this or that. By page 5, Glen literally lays out some
kind of mesh model, talks about memory and the path not taken. There
are, again, all kinds of places to play the game. OTOH, maybe it *is*
more fun to fraggle the doozer by ignoring the model and trolling the
foundations. Personally, I have written too many bots to think that
doing that manually would be any fun.



--
Sent from: http://friam.471366.n2.nabble.com/
