Steve Smith's use of the phrase "arms race" reminded me of John Brunner's *The Shockwave Rider* and its underlying premise of the dangers of constant change: "first the legs race, then the arms race, then the brain race". (Brunner was inspired by Toffler's book, *Future Shock*.)

The book also poses a problem: if you have two bodies in orbit, how does one catch up to or surpass the other? "See you later, accelerator" illustrates the perceived fallacy of these kinds of "races."

The current AI mania is akin to the brain race in Brunner, except that in the book the race was to increase/augment human intelligence, not artificial intelligence.

I wonder where the world might be if the same effort and money that has been spent on artificial intelligence had instead been invested in Engelbart's effort to augment human intelligence.

davew


On Thu, Mar 30, 2023, at 11:19 AM, Steve Smith wrote:

> *GePR* -
>
>> Well, I "agree" with the open letter, for different reasons than
>> Steve.
>> Just yesterday, a colleague (who should know better) made a
>> similar assertion to Nick's (and mine, and maybe Marcus', etc.)
>> that *we* may be in the same category as a transformer decoder
>> assembly. The context was whether a structure like GPT, designed
>> specifically so that it can add high-order Markovian token
>> prediction, can possibly embody/encapsulate/contain mechanistic
>> models.
>
> Can you elaborate on how this is an "agreement" with the open letter?
> I'm not clear on what you are agreeing with, or on what principle.
>
>> While I don't subscribe to the fideistic (or Luddite-like)
>> write-off of such structures as vapid or even "non-sentient",
>> there *is* something we're doing that they are not. I can't quite
>> articulate what it is we do that they don't. But I think there is.
>> And I think it (whatever "it" is) is being targeted by
>> mechanism-based (or physics-based) machine learning.
>>
>> Whether one is a skeptic (as I am) or a proponent (as Marcus
>> portrays here), pre-emptively regulating (or attempting to
>> regulate) the research and training is a bad thing to do, perhaps
>> a Pyrrhic victory. From a skeptical perspective, it slows our
>> *falsification* of transformer decoder assemblies as containers
>> for mechanistic reasoning. For proponents, it puts us behind
>> others who would continue to make progress.
>
> I do agree that when we are in an "arms race" it feels like there
> is nothing to do except "run faster" and, for the love of all
> that is good, never take a pause for any reason.
>
> To quote Thomas Jefferson (referring to slavery): "I think we
> have a wolf by the ears; we can neither continue to hold it, nor
> can we afford to let it go."
>
>> So, yes, it has a feedback effect, a deleterious one.
>
> My inner Luddite believes that we are always in spiritual/social
> debt, and that most, if not all, of our attempts to dig out with
> more technology have, at best, the benefit of rearranging the
> shape of the hole we are in, and generally deepen and steepen its
> profile.
>
> That said, I live my life with a shovel in one hand and a digging
> bar in the other, even if I've (mostly) put away the diesel
> excavator, dynamite, and blasting caps... I *am* homo faber and
> this is *in* my destiny, but I want to believe that I am also the
> superposition of many other modes
> (https://en.wikipedia.org/wiki/Names_for_the_human_species), with
> perhaps *homo adaptabilis* the most significant? If we
> do not at least consider our own self-regulation as a collective,
> then I think we risk degenerating into *homo avarus* or
> *homo apathetikos*.
>
> -SAS