[FRIAM] A question for tomorrow

Nick Thompson nickthompson at earthlink.net
Sat Apr 27 13:48:15 EDT 2019


So, Lee, you ask:

> So, Nick, why are you asking what Turing machines think, instead of what
> modern computers think?  (Be careful how you answer that...)

 

So, I am trying to think like an honest monist.  It seems to me that a
Turing Machine is a monist event processing system.  All you got is marks on
the tape, right? 
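To make "all you got is marks on the tape" concrete, here is a toy sketch (my illustration, not anything from the thread) of a Turing machine in which the system's entire state really is just tape marks, a head position, and a state label; the example `increment` machine and the `run` helper are my own inventions:

```python
# A toy Turing machine (my illustration, not Turing's own example).
# The system is "monist" in the relevant sense: nothing exists but
# marks on a tape, a head position, and a current state label.

def run(transitions, tape, state="start", head=0, max_steps=10000):
    """Run a Turing machine until it halts; return the final tape."""
    cells = dict(enumerate(tape))            # sparse tape: position -> mark
    for _ in range(max_steps):
        if state == "halt":
            break
        mark = cells.get(head, "_")          # "_" is the blank symbol
        new_mark, move, state = transitions[(state, mark)]
        cells[head] = new_mark
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, "_") for i in span).strip("_")

# One machine: scan right over a block of 1s; at the first blank,
# write a 1 and halt -- i.e. unary increment.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run(increment, "111"))                 # -> 1111
```

Nothing in the run depends on anything outside the table of transitions and the marks themselves, which is the monist point.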

 

Nick 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/


-----Original Message-----
From: Friam [mailto:friam-bounces at redfish.com] On Behalf Of
lrudolph at meganet.net
Sent: Saturday, April 27, 2019 10:45 AM
To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
Subject: Re: [FRIAM] A question for tomorrow

 

Frank writes:

> I would hate to have to demonstrate that a modern computer is an
> instance of a Turing Machine.  Among other things they usually have
> multiple processors as well as memory hierarchies.  But I suppose it
> could be done, theoretically.

First a passage from a chapter I contributed to a book edited by a graduate
student Nick knows (Zack Beckstead); I have cut out a bit in the middle
which aims at a different point not under consideration here.

===begin===

If talk of "machines" in the context of the human sciences seems out of
place, note that Turing (1936) actually introduces his "automatic machine"
as a formalization (thoroughly mathematical, though described in suggestive
mechanistic terms like "tape" and "scanning") of "an idealized *human*
calculating agent" (Soare, 1996, p. 291; italics in the original), called
by Turing a "computer". [...] As Turing remarks, "It is always possible for
the computer to break off from his work, to go away and forget all about
it, and later to come back and go on with it" (1936, p. 253). It seems to
me that then it must also be "always possible for the computer to break
off" and never "come back" (in fact, this often happens in the lives, and
invariably upon the deaths, of non-idealized human calculating agents).

===end===

Of course Turing's idealization of "an idealized *human* calculating agent"
also idealizes away the fact that human computers sometimes make errors.  A
Turing machine doesn't make errors.  But both the processors and the memory
of a modern computer can, and *must*, make errors (however rarely, and
however good the error-detection).  To at least that extent, then, modern
computers are not *perfect* instantiations of Turing machines.  On the
other hand, that very fact about them makes them (in some sense) *more*
like (actual) human calculating agents.
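As a toy illustration of that last point (my sketch, not from the original message): even parity, about the simplest error-detection scheme there is, catches every single-bit flip yet is provably blind to some double flips, so detection can be good but never perfect:

```python
# Illustration (mine, not from the email): even-parity error detection.
# It catches any single-bit flip but, like every finite check, not every
# possible error: two flips in the same word cancel out and go unseen.

def with_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def check(word):
    """True if the word (data + parity bit) passes the parity check."""
    return sum(word) % 2 == 0

word = with_parity([1, 0, 1, 1])       # data plus its parity bit
assert check(word)                     # intact word passes

word[2] ^= 1                           # a single bit flip ("soft error")
assert not check(word)                 # ...is detected

word[0] ^= 1                           # a second flip restores parity
assert check(word)                     # ...and the error passes unseen
```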

 

So, Nick, why are you asking what Turing machines think, instead of what
modern computers think?  (Be careful how you answer that...)


============================================================

FRIAM Applied Complexity Group listserv

Meets Fridays 9a-11:30 at cafe at St. John's College

to unsubscribe: http://redfish.com/mailman/listinfo/friam_redfish.com

archives back to 2003: http://friam.471366.n2.nabble.com/

FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
