[FRIAM] emergent mind - ai news by ai

glen gepropella at gmail.com
Wed Mar 29 17:16:48 EDT 2023


It's ridiculous. Suddenly, I feel more akin to that Chinese guy who GE'd some babies ... or the biohackers growing glowing dogs in their shed. You can't control people with open letters and calls for "good behavior". Maybe if they had not included the "automate away all jobs" hype, I'd have a bit more sympathy ... or maybe if people like Musk, who concretely, literally, is directly responsible for job losses across a constellation of domains, had not signed the stupid thing. Tu quoque, I guess.

The way we govern such things is with legal accountability. How will we punish Microsoft, who is clearly a person in the eyes of our law? Can we throw Microsoft in jail for subsidizing AI training? Fine them to corporate "death"? Pfft. And even if we can, could we punish companies in China or Qatar? No accountability implies no "moratorium". Smart people can be so stupid.

I like Volokh's recent post on Large Libel Models: https://reason.com/volokh/2023/03/29/knowing-reckless-falsehood-theories-in-large-libel-models-lawsuits-against-ai-companies/

Directly akin to Jochen's post a while back showing GPT's [ahem] hallucinations about FriAM participants.

On 3/29/23 11:13, Steve Smith wrote:
> Has anyone (else) read the "Pause AI" open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/?ref=emergentmind ?
> 

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
