[FRIAM] selective optimism

glen gepropella at gmail.com
Tue May 9 11:39:55 EDT 2023


IDK. I still haven't read The Dawn of Everything ... or much of anything from that domain at all. But this article tweaked me:

Revealed: modern humans needed three tries – and 12,000 years – to colonise Europe
https://www.theguardian.com/science/2023/may/07/revealed-modern-humans-needed-three-tries-and-12000-years-to-colonise-europe

Dave's sentiment, here, seems anti-human to me, maybe even anti-biology. My two usual whipping posts are "To Engineer is Human" <https://bookshop.org/p/books/to-engineer-is-human-the-role-of-failure-in-successful-design-henry-petroski/6705174?ean=9780679734161> and "The Extended Mind" <https://academic.oup.com/analysis/article-abstract/58/1/7/153111?redirectedFrom=fulltext&login=false>. (To my lefty friends who separate humans from animals, I often try to use "The Extended Phenotype" for the same basic rhetoric.)

This encapsulation of agency inside the skin (known as liberalism, classical or otherwise) is delusional. We *are* our tools and our tools are us. And not merely as duals, but as an interwoven, dynamic, plectic heterarchy. To disentangle us into AI vs. human is some kind of debilitating category error. Depending on your perspective at the time, your reductive, abstracting powers will separate any two clumps from the ambience and you'll register an asymmetry between those registered clumps. Two seconds later, you may re-register and see reciprocity. Etc.

But one thing's for sure: as we ossify into old age, whatever re-registration we last experienced is *more likely* to stick and be the one we're convicted to for the rest of our days. Our ability to flip from one preemptive registration to another fades, no matter how intensely we dose our 5-HT2ARs. And the only progress we make is through the death of the skin sacks (and their ossified concepts) that came before. Post-humanism is also loaded with new age nonsense and a bit of a false dichotomy. But if one generation considers itself "human", the next generation is post-human. And just like the current kids' facility with TikTok, the next round of kids will be facile with LLMs. And those kids will have red, gray, and blue teams for their games, just like their ancestors did for the older games.

On 5/9/23 06:50, Prof David West wrote:
> The opinion of an "advanced layman."
> 
> I claim the status because my Computer Science MS was in AI. My first professional publication was in /AI Magazine/, then the journal of record for the discipline. I have appeared on panels with Herbert Simon, Marvin Minsky, and Hubert Dreyfus at AI conferences. I taught AI courses at the University of New Mexico circa 2009. I have observed the field more or less continuously, but as an interested observer—not an expert and certainly not a practitioner.
> 
> I have always been a critic, from the time that Simon and Newell claimed that they had "created an artificial intelligence" because it successfully mimicked the way that university professors claimed to think, to the present day. I am convinced that advocates of AI, and claimants with regard to its power and potential (and threat), ground their assertions in an "equivalence" between their work and a debased and limited model of human intelligence.
> 
> The only danger that _will_ (and I use the definite will, not the potential maybe) result from widespread AI is that "the masses" will believe the hype and come to believe that they, as humans, are inferior in every way to machines. I believe that political and economic elites will exploit this denigration of the human in order to consolidate their power (they already have the wealth). To me, this is nothing more than an acceleration of a 75-year trend of using the educational system to produce graduates who are compliant and gullible rather than informed and intelligent—the latter, obviously, being dangerous to the social order.
> 
> As a species we have, collectively, created gods, forgotten how and why we did so, then worshiped them as Gods—vastly and inevitably superior beings. AI is just godmaking 2.0.
> 
> davew
> 
> On Tue, May 9, 2023, at 1:34 AM, Tom Johnson wrote:
>> It doesn't have to be either/or. I suspect a mix of the two will most likely evolve, as has been the case with the whole Digital Revolution.
>> TJ
>>
>> =======================
>> Tom Johnson
>> Inst. for Analytic Journalism
>> Santa Fe, New Mexico
>> 505-577-6482
>> =======================
>>
>> On Mon, May 8, 2023, 9:43 PM Pieter Steenekamp <pieters at randcontrols.co.za> wrote:
>>
>>     People have different ideas about AI. Naomi Klein thinks that the idea that AI will solve all our problems is a big joke. She thinks the tech people are trying to trick us! She thinks AI is not just a tool but also a creation of the people who made it. Naomi is afraid that if we keep believing in this lie, we won't fix the real problems we have.
>>
>>     On the other hand, Sam Altman is excited about AI! He thinks AI can help us solve things like diseases and climate change, and even drive us around and cook for us! He doesn't think AI will take over the world or hurt people. Sam thinks humans will always be in charge of AI.
>>
>>     So, who's right? I don't know! My magic ball's batteries are dead, so I can't tell you. But I guess we'll have to wait and see what happens!
>>
>>     On Mon, 8 May 2023 at 23:42, Marcus Daniels <marcus at snoutfarm.com> wrote:
>>
>>         He's not lying; he's running his softmax function at a higher temperature to collect more samples in the vicinity of the truth.
>>
>>         > On May 8, 2023, at 12:50 PM, glen <gepropella at gmail.com> wrote:
>>         >
>>         > AI machines aren’t ‘hallucinating’. But their makers are.
>>         > https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
>>         >> Is all of this overly dramatic? A stuffy and reflexive resistance to exciting innovation? Why expect the worse? Altman reassures us: “Nobody wants to destroy the world.” Perhaps not. But as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world’s life-support systems, so long as they can keep making record profits that they believe will protect them and their families from the worst effects. Altman, like many creatures of Silicon Valley, is himself a prepper: back in 2016, he boasted: “I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”
>>         >> I’m pretty sure those facts say a lot more about what Altman actually believes about the future he is helping unleash than whatever flowery hallucinations he is choosing to share in press interviews.
>>         >

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


