[FRIAM] Maybe a new hardware approach to deal with AI developments

Marcus Daniels marcus at snoutfarm.com
Tue Sep 19 16:54:25 EDT 2017


Glen writes:


"Making sense of the final configuration that seems to handle the I/O relation the way it "should", consists largely of studying the embedding of the configuration.  The meaning comes from the interaction with what's out there, not some decoupled internal structure."


To the extent there is compression or partitioning/expansion of the I/O relation, that structure might give a `story' with regard to what's out there.

How do progressively higher levels in a neural net selectively combine signals into mappings?   My dog isn't going to tell me how she selects an item to steal & march around with, but if I could probe neurons in her brain I might find one that fires for large but lightweight soft things like pillows, paper towels, and so on.
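That kind of probe has a software analogue. The sketch below is a toy illustration only — the feature encoding ([size, weight, softness]), the item vectors, and the hand-set weights are all invented for the example, not learned — but it shows what it means for one hidden unit to "fire" selectively for large, lightweight, soft things:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical feature encoding: [size, weight, softness], each in [0, 1].
items = {
    "pillow":      np.array([0.9, 0.10, 0.9]),
    "paper_towel": np.array([0.7, 0.05, 0.8]),
    "brick":       np.array([0.5, 0.90, 0.0]),
}

# Hand-set hidden-layer weights (a real net would learn these):
# unit 0 combines "large AND light AND soft"; unit 1 mostly tracks weight.
W_hidden = np.array([
    [ 1.0, -1.0,  1.0],   # unit 0: size - weight + softness
    [-0.2,  1.0, -0.5],   # unit 1: heaviness detector
])

def probe(unit, x):
    """Activation of one hidden unit for one stimulus -- the software
    analogue of sticking an electrode in and showing the dog objects."""
    return relu(W_hidden @ x)[unit]

for name, x in items.items():
    print(f"{name}: unit-0 activation = {probe(0, x):.2f}")
```

Unit 0 responds strongly to the pillow and the paper towels and not at all to the brick — the "selective combination" is just a weighted sum followed by a threshold, repeated layer by layer.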


Marcus

________________________________
From: Friam <friam-bounces at redfish.com> on behalf of gⅼеɳ ☣ <gepropella at gmail.com>
Sent: Tuesday, September 19, 2017 2:43:55 PM
To: FriAM
Subject: Re: [FRIAM] Maybe a new hardware approach to deal with AI developments

Something like what's discussed in the nytimes article *must* obtain for computers to ever be as embedded as the human brain.  We can make an analogy that helps explain why RussA's reified ideas argument is (slightly) flawed, but satisficing for a seemingly large number of tasks.  The analogy being CPU ⇔ thoughts.  As the nytimes article points out, the centralization of the computer's "thoughts" into the CPU has taken us really far, as has (perhaps) centralization-friendly philosophy like we got from Plato.  But CPUs and the thoughts of philosophers have *never* really been disembodied.  RussA's idea (contra Hoffman, I think) that there is a strong correlation between the world and thoughts, strong enough to imply that we can share/communicate ideas, relies on the hidden assumption that the communicating processes have the same embedding (eyeballs, fingers, ears, etc. for brains and disks, GPUs, RAM, etc. for CPUs).

The shared embedding is the source of the shared semantics ... It is the reason we (are tricked into thinking we can) share ideas.  This is also true for computational infrastructure like ANNs or GAs trained on particular data or in a particular context.  Making sense of the final configuration that seems to handle the I/O relation the way it "should", consists largely of studying the embedding of the configuration.  The meaning comes from the interaction with what's out there, not some decoupled internal structure.

I think this is at least part of why QM is appealing to philosophers and vice versa, because (e.g.) entanglement is a (very particular) type of environmental coupling.  What information is closed under which operations?  And what information is sensitive to couplings under which operations?


On 09/19/2017 12:00 PM, Marcus Daniels wrote:
> [mixing threads]
>
>
> Mermin’s “Shut up and calculate” view to me seems like agreeing to be blind because there is Braille.
>
> This to me has the same feel as agreeing that `real’ is whatever “a community of inquiry” says it is.    How can one generate hypotheses in a productive way without any intuition or metaphysical foundation?  Why would anyone want to?  It seems to me that doing theory this way is something a computer might as well do.   I _believe_ something because I can manipulate it, visualize it, and anticipate a certain kind of result, not because it is written in a textbook or because a prediction pops out of a supercomputer.   That formality is added value to the intuition, not a substitute for it.
>
>
> Suppose (and it is not just hypothetical) that a machine learning algorithm could suggest how to design a battery with maximum capacity, develop recipes that extended life, or find computationally efficient solutions to the evolution of quantum systems, or answer any number of hard scientific questions or solve any number of relevant engineering problems.   Suppose it was completely mysterious to humans (at first) how it worked, but it worked perfectly.   The systems never failed and the predictions were always spot-on.   Has something `real’ been found?    The “Shut-up and calculate” approach seems to say yes.   Why should I prefer to read papers or textbooks describing human experiences?  Instead, perhaps find ways to unpack and rationalize the machine representations (e.g. neural nets, rule-based systems, whatever).
>
>
> Marcus
>
> ________________________________
> *From:* Friam <friam-bounces at redfish.com> on behalf of Alfredo Covaleda Vélez <alfredo at covaleda.co>
> *Sent:* Monday, September 18, 2017 8:09:01 PM
> *To:* The Friday Morning Applied Complexity Coffee Group
> *Subject:* [FRIAM] Maybe a new hardware approach to deal with AI developments
>
> Probably it is the most interesting tech article that I have read in weeks.
>
> https://mobile.nytimes.com/2017/09/16/technology/chips-off-the-old-block-computers-are-taking-design-cues-from-human-brains.html?emc=edit_th_20170917&nl=todaysheadlines&nlid=58593627&referer=


--
☣ gⅼеɳ
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/ by Dr. Strangelove

