[FRIAM] is it possible that ...

uǝlƃ ☣ gepropella at gmail.com
Thu Feb 13 10:54:05 EST 2020


... being reasonable is a bad thing?

Nick's entreaty here [1], this article [2], Dave's assurances here [3], rule #5 here [4], the hidden spoiler effect criticism of ranked-choice voting [5], counterintuitive irrationality from appeals to rationality like [6], [7], and [8], etc.
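
To make [5] concrete, here's a toy center-squeeze election. The ballot counts are invented and the tally code is a bare-bones sketch (no tie-breaking), but any profile shaped like this behaves the same way: candidate P's mere presence flips the winner from C to R, even though P never comes close to winning and C beats everyone head to head.

from collections import Counter

def irv_winner(ballots):
    """ballots: mapping from a ranking tuple to a vote count."""
    alive = set(c for ranking in ballots for c in ranking)
    while True:
        # Each ballot counts for its highest-ranked surviving candidate.
        tally = Counter()
        for ranking, n in ballots.items():
            for c in ranking:
                if c in alive:
                    tally[c] += n
                    break
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return top
        alive.discard(min(tally, key=tally.get))  # eliminate last place

# Invented ballot counts: R = right, C = center, P = progressive.
profile = {("R", "C", "P"): 37,
           ("C", "R", "P"): 14,
           ("C", "P", "R"): 15,
           ("P", "C", "R"): 34}
print(irv_winner(profile))  # R -- C is eliminated first, R beats P 51-49

# Remove P from every ballot (aggregating the collapsed rankings):
no_p = Counter()
for ranking, n in profile.items():
    no_p[tuple(c for c in ranking if c != "P")] += n
print(irv_winner(no_p))     # C -- C beats R 63-37 head to head

That's the "hidden" part: nothing in the final-round totals tells you the election was spoiled.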

Appeals to "enlightenment" seem flawed. You can't have a reasonable discussion with a bad faith actor. I experienced this personally when my very friendly Christianity-on-his-sleeve neighbor would consistently engage me in conversation and insist he felt love and had empathy for everyone. But when the topic came up of the billboards the Freedom From Religion Foundation put up across Portland, he literally *spat* the word "atheist". The way he said the word, and the context of the sentence in which he used it, were dripping with hatred. I immediately stopped him and asked why he said it that way, with such hatred. He had no idea what I was talking about. After like 30 minutes more of talking about it, forcing him to use the word over and over again, and telling him again and again that while I don't call myself that, a *lot* of my *friends* do call themselves that, and that if he really hates atheists, then he also hates me, he finally admitted that maybe, deep down, he does hate atheists because God hates them, and therefore it's proper to hate them. I felt like I should take a shower after that conversation.

Is being reasonable the right thing? If it doesn't work with my neighbor, after hours in several open conversations, who *could* it work with? If Bitecofer is right that people who claim to be independent might actually be crypto-partisans, and people who wear the masks of "enlightenment" and "rationality" might actually be crypto-tribalists, then what's the point of being reasonable or "utilitarian"? We should all hoist the Jolly Roger and begin slitting throats.

Maybe we need the obligatory "complex adaptive system" rhetoric to argue that the feedback loop through which reasonableness acts has a lower rate than, say, a human lifetime? The reasonableness we display over, if we're lucky, 80 years of life only shows *effect* after, say, 150 years ... or 300 years? All those stories of brilliant, sensitive people who fought the reasonableness fight over their lives and died insane or by their own hand might be victims of a cause-effect rate mismatch? Or perhaps there's a transfinite game theoretic explanation where populations of partisans co-evolve with populations of the swayable reasonable, but the cycles are invisible to the history-impaired ... or the system-impaired (those who can't think in terms of collectives)?
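
Just to caricature the rate-mismatch version (a minimal sketch -- the population names, the predator-prey coupling, and every rate constant below are invented, not calibrated to anything): with per-year rates of a few percent, the linearized cycle period of the toy below is 2*pi/sqrt(growth*burnout) ~ 314 years. An 80-year observer catches at most one arc of that cycle and reads it as a monotone trend.

import numpy as np

def step(reasonable, partisan, dt=0.1,
         growth=0.02, exploitation=0.04,   # per-year rates -- invented
         conversion=0.03, burnout=0.02):
    # Lotka-Volterra-style coupling: partisans thrive by exploiting the
    # reasonable; the reasonable recover as partisans burn out. Crude
    # forward Euler, which is fine for a sketch.
    dr = (growth - exploitation * partisan) * reasonable
    dp = (conversion * reasonable - burnout) * partisan
    return reasonable + dr * dt, partisan + dp * dt

r, p = 1.0, 0.4                       # arbitrary initial densities
history = [(0.0, r, p)]
for t in np.arange(0.1, 400.0, 0.1):  # 400 simulated years
    r, p = step(r, p)
    history.append((t, r, p))

# When does the "reasonable" population peak?
peaks = [history[i][0] for i in range(1, len(history) - 1)
         if history[i - 1][1] < history[i][1] > history[i + 1][1]]
print(peaks)  # successive peaks roughly three centuries apart

Whether anything like this governs cultural dynamics, I have no idea. The point is only that a cycle three or four times longer than a life is observationally indistinguishable from a lost cause.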

I suppose my agnosticism reigns and I have a moral imperative to be reasonable when the use case calls for it and slit throats when the use case calls for that. But I definitely need better metrics to summon the right tactic at the right time.



[1] Trump's motives not judiciable because they are in his head
http://friam.471366.n2.nabble.com/Trumps-motives-not-judiciable-because-they-are-in-his-head-tp7594411p7594416.html

[2] New research may explain the weakness of centrism and the religious left
https://www.rawstory.com/2020/02/new-research-may-explain-the-weakness-of-centrism-and-the-religious-left/

[3] Up and Out vs Down and in
http://friam.471366.n2.nabble.com/Up-and-Out-vs-Down-and-in-tp7594452p7594453.html

[4] Autocracy: Rules for Survival
https://www.nybooks.com/daily/2016/11/10/trump-election-autocracy-rules-for-survival/

[5] Instant-runoff voting: Spoiler effect
https://en.wikipedia.org/wiki/Instant-runoff_voting#Spoiler_effect

[6] Heterodox Academy
https://heterodoxacademy.org/

[7] Dark Enlightenment
https://en.wikipedia.org/wiki/Dark_Enlightenment

[8] The owning of Scott Aaronson
https://alfanl.com/2017/03/03/the-owning-of-scott-aaronson/

-- 
☣ uǝlƃ

