[FRIAM] GPT-3 and the Chinese room

uǝlƃ ↙↙↙ gepropella at gmail.com
Mon Jul 27 20:14:44 EDT 2020


I think I read somewhere that the context width is 2048 tokens. What's that? At roughly three-quarters of an English word per BPE token, call it 1,500 words. Like a short paper ... half an in-depth paper? An Atlantic article, maybe? I know they delayed the release of GPT-2 and haven't released GPT-3 because of the abuse potential. But it would be very cool to prime it with a long expression, get the response, make a point mutation, get the response, make a hoist mutation, ..., steadily scaling up the changes, classify the results, and see if there are clear features in the output that are not commensurate with those in the inputs (a sketch of what I mean is below). Do you know of anyone reporting anything like that?
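
Something like this is what I have in mind. A minimal sketch: query_model() is a hypothetical stand-in for whatever completion endpoint you can get at, and the flat-token versions of point and hoist mutation are my own loose analogs of the genetic programming operators, not anything from the paper:

import random

def query_model(prompt):
    # hypothetical stand-in: replace with a real completion call
    return "<model response to: " + prompt[:40] + "...>"

def point_mutation(tokens):
    # replace one randomly chosen token with a token drawn from the
    # prompt's own vocabulary (a stand-in for a real tokenizer vocab)
    out = list(tokens)
    out[random.randrange(len(out))] = random.choice(tokens)
    return out

def hoist_mutation(tokens):
    # keep only a random contiguous slice of the prompt, loosely
    # analogous to hoisting a subtree in genetic programming
    i = random.randrange(len(tokens))
    j = random.randrange(i + 1, len(tokens) + 1)
    return tokens[i:j]

prompt = "some long priming expression"
tokens = prompt.split()
responses = {
    "baseline": query_model(" ".join(tokens)),
    "point": query_model(" ".join(point_mutation(tokens))),
    "hoist": query_model(" ".join(hoist_mutation(tokens))),
}
# classify/diff the responses, looking for features in the outputs
# that aren't commensurate with the size of the input changes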

Re: the singularity - I think it's like the big bang. It kindasorta looks like a singularity from way out on the flat part, but it'll always be locally flat. From that perspective, we're already deep into asymptopia.

On 7/21/20 6:13 PM, Russell Standish wrote:
> As I noted on the Slashdot post, I was really surprised at the number
> of trainable parameters. 175 billion. Wow! The trainable parameters in
> an ANN are basically just the synapses, so this is actually a human
> brain scale ANN (I think I read elsewhere this model is an ANN), as
> the human brain is estimated to have some 100 billion synapses.
> 
> I remember the Singularitarian guys predicting human-scale AIs by 2020,
> based on Moore's law extrapolation. In a sense they're right. Clearly,
> it is not at human-scale competence yet, and probably won't be for a
> while, but it is coming. Remember also that it takes 20-plus years
> to train a human-scale AI to full human-scale competence - we'll
> see some shortcuts, of course, and continuing technological
> improvements in hardware.
> 
> What's the likelihood of a Singularity by mid-century (30 years from now)?
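
Re: the 175 billion - that number does check out against the shapes published in the GPT-3 paper (96 layers, d_model = 12288, ~50k BPE vocabulary). A back-of-envelope count, using the usual 12*L*d^2 rule of thumb for transformer blocks:

n_layers, d_model, vocab = 96, 12288, 50257

# per block: ~4*d^2 attention weights plus ~8*d^2 feed-forward weights
# (the hidden layer is 4x d_model); biases, layer norms, and positional
# embeddings are ignored as rounding error
block_params = 12 * n_layers * d_model**2
embed_params = vocab * d_model
print((block_params + embed_params) / 1e9)  # ~174.6 billion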

-- 
↙↙↙ uǝlƃ


