[FRIAM] GPT-3 and the Chinese Room

Russell Standish lists at hpcoders.com.au
Tue Jul 21 21:13:17 EDT 2020


As I noted on the Slashdot post, I was really surprised at the number
of trainable parameters: 175 billion. Wow! The trainable parameters in
an ANN are basically just the synaptic weights, and GPT-3 is indeed an
ANN (a transformer network), so this is in the neighbourhood of the
human brain's roughly 100 billion neurons - though the brain's synapse
count, the more direct analogue, is estimated at around 100 trillion,
so there is still a long way to go on that measure.
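
For a rough sense of scale, here's a quick back-of-envelope in Python
(the brain figures are the usual order-of-magnitude estimates, which
vary a lot between sources):

# Back-of-envelope comparison of GPT-3's parameter count against
# commonly quoted (rough) estimates for the human brain.
gpt3_params    = 175e9   # trainable parameters reported for GPT-3
brain_neurons  = 86e9    # ~86 billion neurons, often rounded to 100 billion
brain_synapses = 1e14    # ~100 trillion synapses

print(f"params / neurons:  {gpt3_params / brain_neurons:.1f}x")   # about 2x
print(f"params / synapses: {gpt3_params / brain_synapses:.4f}x")  # well under 1%

So roughly twice the neuron count, but well under 1% of the synapse count.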

I remember the Singularitarian crowd predicting human-scale AIs by 2020,
based on Moore's-law extrapolation. In a sense they're right. Clearly,
it is not at human-scale competence yet, and probably won't be for a
while, but it is coming. Remember too that it takes 20-plus years to
train a human-scale AI up to full human-scale competence - we'll see
some shortcuts, of course, and continuing technological improvements
in hardware.
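
For concreteness, here's what that sort of extrapolation looks like as a
toy calculation in Python (the doubling period and the target scale are
illustrative assumptions, nothing more):

import math

def years_to_reach(target, current, doubling_period_years=2.0):
    # Years until `current` reaches `target`, doubling every doubling_period_years.
    return math.log2(target / current) * doubling_period_years

current_scale = 175e9   # a GPT-3-sized parameter count, 2020
target_scale  = 1e14    # order of the human brain's synapse count

print(years_to_reach(target_scale, current_scale))  # roughly 18 years

At a two-year doubling time that lands in the late 2030s - hence the
question below.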

What's the likelihood of a Singularity by mid-century (30 years from now)?

On Tue, Jul 21, 2020 at 01:20:31PM -0700, uǝlƃ ↙↙↙ wrote:
> Just for any old cf:
> https://analyticsindiamag.com/open-ai-gpt-3-code-generator-app-building/
> 
> Someone mentioned in a recent thread, here, the Chinese Room thought experiment, to which my reaction is always "Bah! That's nothing but a loaded question" ... like "have you stopped beating your child?" But the truth is, my answer to the Chinese Room is that it *is* intelligent. GPT-3 is nothing but the Chinese Room. Similarly, all we are is deep memory machines trained up on huge datasets. At some point, I've made the argument that the demonstration of *understanding* can't be made through language. As fond as I am of repeating someone's point back in my own words to demonstrate I've grokked it, *ultimately* the only demonstration of understanding that I really accept is in the *doing* or the *making* of stuff.
> 
> Now, there's some prestidigitation behind debuild.co. But at first blush, here is a machine that *understands* the website specification well enough to actually code the website. The AI skeptics will move the goalposts, of course, as they always do. E.g. they can say that programming a website to meet specs isn't a big deal - we've had declarative and domain-specific languages for a while. And web pages and programming languages are all purely linguistic anyway. But it's a short trip from here to, say, a CNC machine, a 3D printer, a script for a light show, or even algorithmic composition of music.
> 
> I'm reminded of people who are expert at some task, like playing baseball or whatever, but when asked *how* they do what they do, they're at a loss ... tacit but no reflective understanding ... like a cat not really recognizing itself in a mirror, where dolphins do.
> 
> What's actually missing in the machines we berate as being mindless algorithms is not general intelligence or universal computation. It's general-purpose sensorimotor systems ... universal manipulation ... hands with thumbs, tightly coupled feedback loops like our sense of touch, excruciatingly sensitive data-fusion organs like olfactory bulbs, etc. I think I can argue that's what gives us "understanding" ... not whatever internal computation we're capable of.
> 
> 
> -- 
> ↙↙↙ uǝlƃ
> 

-- 

----------------------------------------------------------------------------
Dr Russell Standish                    Phone 0425 253119 (mobile)
Principal, High Performance Coders     hpcoder at hpcoders.com.au
                      http://www.hpcoders.com.au
----------------------------------------------------------------------------


