[FRIAM] the arc of ai (was Re: Whew!)

glen ☣ gepropella at gmail.com
Thu May 4 12:07:42 EDT 2017


Though definitely less scifi than it used to be, Ted's prediction is clearly wrong in its most important part: increasing specialization.  The distinction missing (here -- I haven't read the manifesto) is that between special and general intelligence.  GI is still mysterious and we're a long way from any GAI.  And what passes for SAI, like Watson and AlphaGo, is still not very much like, say, the nerd who wins all the trivia nights or a professional Go player who also happens to be married.  The idea that humans will become the specialized intelligences and the system (bureaucracy and/or robots) will become the general intelligence(s) is proving false.

But the moral and ethical part of what he's saying is coming true.  And the ability of corporations to pit us against each other is _clearly_ true, as evidenced in the thread. 8^)  Of course, part of our _elite_ superpower is our ability to tease apart complex structures (like Ted, Coca-Cola, and the etiology of diabetes).  So, small disagreement clearly would never disable our ability to recognize and work towards larger agreement.

Tangent re: anarchists -- my usual argument that those who call themselves "anarchist" must "police themselves" in the same way actual Christians (who hold Jesus' sayings highest) should police the "Christians", won't work.  Or maybe it will.  These "thoughtful anarchists" Merle talks about have a duty to distinguish their beliefs and actions from, e.g., the black bloc (https://en.wikipedia.org/wiki/Black_bloc).  Or, maybe not.  Merle, do these thoughtful anarchists support smashing windows and burning piles of trash in the middle of the city?  Or are such behaviors limited only to "thoughtless anarchists"?  (No judgement on my part either way.  I'm sympathetic to the idea that we sometimes have to go through a trough to reach a peak.  A little disordered heat is sometimes necessary to obtain a stronger crystal.)


On 05/03/2017 07:34 PM, Marcus Daniels wrote:
> “175. But suppose now that the computer scientists do not succeed in developing artificial intelligence, so that human work remains necessary. Even so, machines will take care of more and more of the simpler tasks so that there will be an increasing surplus of human workers at the lower levels of ability. (We see this happening already. There are many people who find it difficult or impossible to get work, because for intellectual or psychological reasons they cannot acquire the level of training necessary to make themselves useful in the present system.) On those who are employed, ever-increasing demands will be placed: They will need more and more training, more and more ability, and will have to be ever more reliable, conforming and docile, because they will be more and more like cells of a giant organism. Their tasks will be increasingly specialized, so that their work will be, in a sense, out of touch with the real world, being concentrated on one tiny slice of reality.
> The system will have to use any means that it can, whether psychological or biological, to engineer people to be docile, to have the abilities that the system requires and to "sublimate" their drive for power into some specialized task. But the statement that the people of such a society will have to be docile may require qualification. The society may find competitiveness useful, provided that ways are found of directing competitiveness into channels that serve the needs of the system. We can imagine a future society in which there is endless competition for positions of prestige and power. But no more than a very few people will ever reach the top, where the only real power is (see end of paragraph 163). Very repellent is a society in which a person can satisfy his need for power only by pushing large numbers of other people out of the way and depriving them of THEIR opportunity for power.”

-- 
☣ glen

