[FRIAM] GPT-3 and the Chinese room

Alexander Rasmus alex.m.rasmus at gmail.com
Mon Jul 27 21:32:15 EDT 2020


Glen,

Gwern has an extensive post on GPT-3 poetry experimentation here:
https://www.gwern.net/GPT-3

I strongly recommend the section on the Cyberiad, where GPT-3 stands in for
Trurl's Electronic Bard: https://www.gwern.net/GPT-3#stanislaw-lems-cyberiad

There's some discussion of tuning the input prompt, but I think there are
more cases where they keep the prompt fixed and show several different
sampled outputs.
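
For what it's worth, the "fixed prompt, several samples" pattern against the
GPT-3 beta API might look roughly like the sketch below. The engine name,
parameter names, and example prompt are my best guesses at the 2020 beta
interface, not anything taken from Gwern's post.

import os
import openai  # beta-era Python client; the interface may have changed since

openai.api_key = os.environ["OPENAI_API_KEY"]

# One fixed prompt, sampled several times at a fairly high temperature so
# the completions differ from one another.
prompt = "Trurl's Electronic Bard was asked for a poem about a haircut:\n"

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model in the beta
    prompt=prompt,
    max_tokens=150,
    temperature=0.9,
    n=5,                # number of independent completions of the same prompt
)

for i, choice in enumerate(response.choices):
    print(f"--- sample {i} ---")
    print(choice.text.strip())

A rough sketch of the mutate-and-query loop you describe is appended after
the quoted messages below.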

Best,
Rasmus



On Mon, Jul 27, 2020 at 6:14 PM uǝlƃ ↙↙↙ <gepropella at gmail.com> wrote:

> I think I read somewhere that the context width is 2048 tokens. What's
> that? Like a short paper ... half an in-depth paper? An Atlantic article,
> maybe? I know they delayed the release of GPT-2 and haven't released GPT-3
> because of the abuse potential. But it would be very cool to prime it with
> a long expression, get the response, make a point mutation, get the
> response, make a hoist mutation, ..., steadily moving up in the size of
> the changes, then classify the results and see if there are clear features
> in the outputs that are not commensurate with those in the inputs. Do you
> know of anyone reporting anything like that?
>
> Re: the singularity - I think it's like the big bang. It kindasorta looks
> like a singularity from way out on the flat part, but it'll always be
> locally flat. From that perspective, we're already deep into asymptopia.
>
> On 7/21/20 6:13 PM, Russell Standish wrote:
> > As I noted on the slashdot post, I was really surprised at the number
> > of trainable parameters. 175 billion. Wow! The trainable parameters in
> > an ANN are basically just the synapses, so this is actually a human
> > brain scale ANN (I think I read elsewhere this model is an ANN), as
> > the human brain is estimated to have some 100 billion synapses.
> >
> > I remember the Singularitarian guys predicting human-scale AIs by 2020,
> > based on Moore's law extrapolation. In a sense they're right. Clearly,
> > it is not at human-scale competence yet, and probably won't be for a
> > while, but it is coming. Remember also that it takes 20-plus years
> > to train a human-scale AI to full human-scale competence - we'll
> > see some shortcuts, of course, and continuing technological
> > improvements in hardware.
> >
> > What's the likelihood of a Singularity by mid-century (30 years from
> now)?
>
> --
> ↙↙↙ uǝlƃ
>
> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/
>
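
A minimal sketch of the mutate-and-query loop described above, for what it's
worth. query_model is a stub for the actual GPT-3 call, and reading "hoist
mutation" as promoting a random contiguous slice of a flat token list is my
own assumption, since hoist mutation is normally defined on program trees.

import random

def point_mutation(tokens, vocab):
    # Point mutation: swap one randomly chosen token for a random token
    # drawn from a small substitute vocabulary.
    out = list(tokens)
    i = random.randrange(len(out))
    out[i] = random.choice(vocab)
    return out

def hoist_mutation(tokens):
    # Flat-sequence analogue of hoist mutation: promote a random contiguous
    # slice to stand in for the whole prompt.
    i = random.randrange(len(tokens))
    j = random.randrange(i + 1, len(tokens) + 1)
    return list(tokens[i:j])

def query_model(prompt_tokens):
    # Stub for the model call; replace with a request to whatever completion
    # endpoint you have access to.
    return "<completion of: " + " ".join(prompt_tokens) + ">"

def mutation_experiment(seed_prompt, vocab, steps=10):
    # Start from a long priming expression, then alternate small and larger
    # edits, steadily moving up in the size of the changes, recording each
    # (prompt, response) pair for later classification.
    tokens = seed_prompt.split()
    results = []
    for step in range(steps):
        response = query_model(tokens)
        results.append((" ".join(tokens), response))
        if step % 2 == 0:
            tokens = point_mutation(tokens, vocab)
        else:
            tokens = hoist_mutation(tokens)
    return results

if __name__ == "__main__":
    runs = mutation_experiment("the quick brown fox jumps over the lazy dog",
                               vocab=["cat", "slow", "red", "walks"])
    for prompt, response in runs:
        print(prompt, "->", response)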