[FRIAM] by any means necessary

glen gepropella at gmail.com
Mon Feb 14 16:12:39 EST 2022


Nobody's singling out Neuralink. The appropriate treatment of animals is standard boilerplate for any testing facility. What I think is fascinating is how tech bros tend to de-emphasize ethics (including the ethical treatment of animals) and emphasize utopian visions of the future. I suppose this is one reason Yudkowsky and the rise of the alignment problem are interesting. It evokes the idea that Hume's Guillotine isn't as crisp as we're taught to think. The idea was taken on an interesting tangent in the latest H+ round table, where Voss and Goertzel disagreed fairly emphatically about whether or not they (separately) have infrastructures that *cover* AGI and whether continued investment in specific AI makes progress toward AGI. On the surface, it seems like the standard foil between the algorithmists and the humanists (e.g. Penrose). But it goes beyond that, into obPlectics and open-endedness. Asserting that the ethics of some method is somehow epi- is myopic. If Neuralink abuses more animals than Synchron, then it should be corrected. But I've heard no evidence of such abuse at Synchron. Perhaps Synchron beat Neuralink to FDA approval *because* they take their methods, including the ethics thereof, more seriously? I wouldn't know one way or another, and the definitely-not-clickbait pieces in Business Insider and the NYP were the first I'd heard of any bad behavior at all.

On 2/14/22 12:44, Marcus Daniels wrote:
> There's a general question of how far is too far with animal testing.    These kinds of neuroprosthesis devices could help many people, so I don't see a reason to single out Neuralink.

-- 
glen
When elephants fight, it is the grass that suffers.
