[FRIAM] Moral collapse and state failure

thompnickson2 at gmail.com
Tue Aug 10 13:41:24 EDT 2021


I wonder if the interpretable/explainable distinction maps on to the goal/function distinction, which maps on to the phenomenon/epiphenomenon distinction, which maps on to the function/spandrel distinction, which maps on to the intension/extension distinction, which .....

Nick Thompson
ThompNickSon2 at gmail.com
https://wordpress.clarku.edu/nthompson/

-----Original Message-----
From: Friam <friam-bounces at redfish.com> On Behalf Of uǝlƃ ☤>$
Sent: Tuesday, August 10, 2021 1:23 PM
To: friam at redfish.com
Subject: Re: [FRIAM] Moral collapse and state failure

Yeah, it was long. I only got through half of it during my workout this morning.

I suppose it's right to say that the normative definition of moral would exclude Trump (or people like him), but only if we stick to your idea that a particular morality be *expressible*. (FWIW, I think the extra qualifier "independently of oneself" is at least a little redundant. Any expression has to be at least somewhat objective ... spoken words cause air vibrations, video recordings of someone talking persist, written documents persist, etc.)

So, there's a hot debate at the moment in machine learning about the different usage patterns for interpretable ML vs. explainable ML. Interpretable ML is supposedly a kind of transparency: you can see inside and have access to the actual mechanism that executes when the algorithm makes a prediction. "Explainable" is weaker in that it doesn't give any direct access to the mechanism; it only describes it somewhat ... "simulates" it.
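
To make that distinction concrete, here's a minimal sketch in Python using scikit-learn (my own toy construction, nothing from that debate). The shallow decision tree's printed rules *are* the mechanism that executes at prediction time (interpretable), while the surrogate tree only mimics a black-box forest's outputs from the outside (explainable):

# Toy contrast: interpretable vs. explainable ML (illustrative sketch only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
names = [f"x{i}" for i in range(4)]

# Interpretable: a shallow tree. The rules printed here are the actual
# mechanism the model executes when it predicts -- direct access.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=names))

# Explainable: a black-box forest, plus a surrogate tree trained on the
# forest's *predictions*. The surrogate describes ("simulates") the black
# box from the outside; it is not the mechanism that made the predictions.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=names))

How faithfully the surrogate tracks the forest is exactly the fidelity question below.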

Targeting your idea that a moral code must be expressible: do you mean a perfect, transparent expression of the mechanism a moral actor uses? Or do you mean simulable ... such that we can build relatively high-fidelity *models* of the mechanism inside the actor?

On 8/10/21 10:11 AM, Russ Abbott wrote:
> The Envy video looked like a lot of fun, but it was too long for me to sit through.
> 
> Regarding morality, my guess is that it's not predictability that leads people to consider someone moral; it's acting according to a framework that can be expressed independently of oneself. Society-wide utilitarianism would be fine; "someone much like Trump [who] says they're an exploitative, gaming, solipsist" and then behaves in a way consistent with that description would not be considered moral, no matter how consistently their behavior simply optimized short-term personal benefits. After all, to take your own Trump example, I doubt that many people would characterize Trump as moral.

--
☤>$ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/

More information about the Friam mailing list