[FRIAM] A question for tomorrow

Russ Abbott russ.abbott at gmail.com
Sat Apr 27 12:58:23 EDT 2019


Nick,

One of the most attractive things about your posts is how charming they
are. They are so well written! Thank you for keeping the discussion at such
a civilized and enjoyable level -- even when I don't agree with you.

-- Russ Abbott
Professor, Computer Science
California State University, Los Angeles


On Sat, Apr 27, 2019 at 9:44 AM <lrudolph at meganet.net> wrote:

> Frank writes:
> > I would hate to have to demonstrate that a modern computer is an instance
> > of a Turing Machine.  Among other things they usually have multiple
> > processors as well as memory hierarchies.  But I suppose it could be
> done,
> > theoretically.
>
> First a passage from a chapter I contributed to a book edited by a
> graduate student Nick knows (Zack Beckstead); I have cut out a bit in the
> middle which aims at a different point not under consideration here.
> ===begin===
> If talk of “machines” in the context of the human sciences seems out of
> place, note that Turing (1936) actually introduces his “automatic machine”
> as a formalization (thoroughly mathematical, though described in
> suggestive mechanistic terms like “tape” and “scanning”) of “an idealized
> *human* calculating agent” (Soare, 1996, p. 291; italics in the original),
> called by Turing a “computer”. [...] As Turing remarks, “It is always
> possible for the computer to break off from his work, to go away and
> forget all about it, and later to come back and go on with it” (1936, p.
> 253). It seems to me that then it must also be “always possible for the
> computer to break off” and never “come back” (in fact, this often happens
> in the lives, and invariably upon the deaths, of non-idealized human
> calculating agents).
> ===end===
> Of course Turing's idealization of "an idealized *human* calculating
> agent" also idealizes away the fact that human computers sometimes make
> errors. A Turing machine doesn't make errors.  But both the processors and
> the memory of a modern computer can, and *must*, make errors (however
> rarely, and however good the error-detection).  To at least that extent,
> then, they are not *perfect* instantiations of Turing machines.  On the
> other hand, that very fact about them makes them (in some sense) *more*
> like (actual) human calculating agents.
>
> So, Nick, why are you asking what Turing machines think, instead of what
> modern computers think?  (Be careful how you answer that...)
>
>
> ============================================================
> FRIAM Applied Complexity Group listserv
> Meets Fridays 9a-11:30 at cafe at St. John's College
> to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives back to 2003: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove
>
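As a minimal sketch of the contrast lrudolph draws, here is a textbook
single-tape Turing machine (a binary incrementer) written in Python, with an
optional per-step fault probability standing in for the hardware errors he
mentions. The names and the fault model (flip_prob, a random flip of the
scanned cell) are illustrative assumptions of mine, not anything from the
thread or from Turing (1936).

    # Illustrative sketch only: a tiny Turing machine that increments a
    # binary number written least-significant bit first, plus a hypothetical
    # per-step chance that the "memory" silently corrupts the scanned cell.
    import random

    # Transition table: (state, scanned symbol) -> (new state, write, move)
    RULES = {
        ('inc', '0'): ('halt', '1', 0),
        ('inc', '1'): ('inc', '0', +1),
        ('inc', '_'): ('halt', '1', 0),   # '_' is the blank symbol
    }

    def run(tape, flip_prob=0.0, max_steps=10_000):
        """With flip_prob == 0 this is the idealized machine; with any
        positive flip_prob it is only a fallible approximation of one."""
        tape = list(tape)
        state, head = 'inc', 0
        for _ in range(max_steps):
            if state == 'halt':
                return ''.join(tape)
            if head >= len(tape):
                tape.append('_')          # extend the tape with a blank
            if flip_prob and random.random() < flip_prob:
                # Fault injection: the scanned cell is silently corrupted.
                tape[head] = '1' if tape[head] == '0' else '0'
            state, write, move = RULES[(state, tape[head])]
            tape[head] = write
            head += move
        raise RuntimeError('did not halt (the idealization allows this)')

    if __name__ == '__main__':
        print(run('111'))                 # ideal machine: 7 -> 8, i.e. '0001'
        print(run('111', flip_prob=0.2))  # fallible machine: sometimes wrong

With flip_prob set to zero the run is the mathematical object; with any
positive value it is at best an approximate physical instantiation, which is
all a real processor or memory can offer, and which is (in lrudolph's sense)
closer to an actual human calculating agent.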