[FRIAM] it's boring

glen gepropella at gmail.com
Mon Jun 20 12:55:05 EDT 2022


Yeah, that's the germ of an actually profound observation about us vs them. "What it's like" to be some thing is nothing more than a strong analogy, both behavioral and structural. If you have all the same parts as another thing, and all the same behaviors, then you experience "what it's like" to be that other thing. True of computers. True of Trump.

An MD Psych therapist friend of mine divides people into 2 groups: 1) the splitters, who incessantly parse, and 2) the smooshers, who incessantly integrate. A splitter with a slightly different database of grievances would make great hay of their tiny differences with Trump (maybe Liz Cheney?). A smoosher would ignore any small differences between themselves and Trump (maybe Kevin McCarthy?).
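
Tongue still firmly in cheek, here's a minimal sketch of Barry's Weak Turing Test below: no deep analysis, just a grievance database. The grievance strings and the respond() helper are made up for illustration, not real quotes.

import random

# The entire "model": a static database of grievances (illustrative placeholders).
GRIEVANCES = [
    "The election was stolen.",
    "Nobody has ever been treated more unfairly.",
    "The fake news media won't report the truth.",
    "It's a witch hunt, the greatest in history.",
]

def respond(prompt: str) -> str:
    """Ignore the prompt entirely and air a grievance at random."""
    return random.choice(GRIEVANCES)

if __name__ == "__main__":
    for question in ("How's the economy?", "Thoughts on the hearings?"):
        print("Q:", question)
        print("A:", respond(question))

The remote observer's job is then to guess which transcript came from the script and which from the man; the joke, of course, is that ignoring the prompt is the behavior being imitated.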

On 6/20/22 09:35, Barry MacKichan wrote:
> This prompts me to propose, with tongue slightly in cheek, the /Weak Turing Test/, which consists of a remote observer trying to distinguish between a computer and Donald Trump. It doesn’t require any deep analysis on the part of the computer, just a database of grievances.
> 
> —Barry
> 
> On 20 Jun 2022, at 11:45, glen wrote:
> 
>     "None of this has anything to do with artificial consciousness, of course. “I don’t anthropomorphize,” Chowdhery said bluntly. “We are simply predicting language.” Artificial consciousness is a remote dream that remains firmly entrenched in science fiction, because we have no idea what human consciousness is; there is no functioning falsifiable thesis of consciousness, just a bunch of vague notions. And if there is no way to test for consciousness, there is no way to program it. You can ask an algorithm to do only what you tell it to do. All that we can come up with to compare machines with humans are little games, such as Turing’s imitation game, that ultimately prove nothing."
> 
> 
>     What amazes me is that few take the logical step of suggesting that there is no such thing as consciousness. We're all "simply predicting language." The only difference between an animal and a language-predicting chatbot is that we *also* "simply predict" actions in 4D spacetime. Any disconnect between the serial prediction of tokens and the (not-so-serial) prediction of all actions comes down to that dimension reduction.
> 
> 
>     On 6/20/22 08:15, Marcus Daniels wrote:
> 
>         https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/
> 

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


