[FRIAM] on government

Prof David West profwest at fastmail.fm
Fri Aug 30 10:41:26 EDT 2024


I agree with glen, with one addition: all "technology" operates within a complex system that 'seeks' equilibrium. Change is resisted, and unless the change is really significant by some measure, the system will return to its original equilibrium point or somewhere 'nearby'.
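
For the simulation-minded, here is a toy version of that claim (the double-well potential and all of the numbers below are invented for illustration, not a model of any real polity):

def relax(x, steps=20000, dt=0.01):
    # overdamped dynamics dx/dt = -V'(x) for V(x) = (x**2 - 1)**2 / 4,
    # a double well with stable equilibria at x = -1 and x = +1
    for _ in range(steps):
        x -= dt * x * (x**2 - 1)
    return x

print(relax(1.0 + 0.3))   # small shock: relaxes back to ~ +1.0
print(relax(1.0 - 2.2))   # shock past the ridge at x = 0: settles near -1.0

Small shocks decay back to the original equilibrium; only a shock large enough to cross the ridge between basins moves the system somewhere genuinely new.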

I remember sitting around St. Johns after Trump won the first time, listening to people predict, with apparent sincerity and genuine fear, that the apocalypse would ensue shortly after Jan 20. I see the same fear-mongering this year: we are doomed if Trump should win again.

Stupid, in my opinion, both times. One man cannot exert sufficient force to move the system very far from its equilibrium point.

The real threat to our governance system comes from small, constantly reinforcing (positive-feedback) changes: think of the story of Tesla's resonant oscillator that reportedly shook buildings in Manhattan. I believe that such change has been and continues to occur, but that, of course, is just "conspiracy theory nonsense."
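
To make the resonance mechanism behind that story concrete (again, the oscillator and its numbers are illustrative only, not a claim about any real mechanism): an undamped oscillator nudged gently at its own natural frequency. Each tiny push arrives in phase with the motion, so the pushes compound instead of averaging out.

import math

x, v, peak = 0.0, 0.0, 0.0
omega, eps, dt = 1.0, 0.01, 0.001             # natural frequency, push size, time step
for step in range(int(200 * math.pi / dt)):   # ~100 driving periods
    a = -omega**2 * x + eps * math.cos(omega * step * dt)
    v += a * dt                               # semi-implicit Euler
    x += v * dt
    peak = max(peak, abs(x))
print(peak)   # ~3, i.e. hundreds of times the push size eps

Add a little damping and the growth saturates; without it, small, constantly reinforcing pushes do all the damage.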

davew


On Fri, Aug 30, 2024, at 8:09 AM, glen wrote:
> First the "analogy", in case that wasn't clear: Any tech (like any 
> myopically intentional action) has things that it considers and things 
> it ignores. E.g. when designing an AI powered chatbot, if you ignore 
> that ~1/2 the training material is racist, you'll get a racist chatbot, 
> no matter how well it passes the Turing test. Similarly, if you design 
> your government without considering a Gamer like Trump, i.e. assuming 
> presidents (candidates and elected) play in Good Faith, you'll fail to 
> consider those edge cases ... fail to harden your technology against 
> abuse. Quibbling over the extent to which governments *are* 
> technologies, rather than merely analogous to them, is sophistry. 
> Culture is just a superset of technology.
>
> Second the driving toward existential threats: Any intentional action 
> is myopic. (I'll take the opportunity to plug Wolpert's paper again 
> <https://arxiv.org/abs/0708.1362>. Whether you buy his argument or not 
> isn't the point: practically, the overwhelming majority of intentional 
> action is myopic.) That implies there are unintended consequences to 
> every action. The larger the (compositional) action, the larger the 
> impact of the unintended consequences. So big techs like the 
> agricultural revolution, industrial revolution, GAMAM, etc. have huge 
> impacts that we, almost by definition, can/will not know or understand.
>
> In order for these huge unintended consequences to be non-existential 
> (or non-threatening), we have to understand them well enough to define 
> their boundaries ... circumscribe their effects. We can't do that ... 
> or, at least, we have no political will to do that. Maybe we'll get 
> lucky and all this churn will simply work out in some beautiful 
> co-evolutionary cauldron of innovation, ever progressing to a Utopian 
> dream? Or maybe not. But even if it is bounded in some meaningful 
> sense, that cauldron still produces existential threats, at least 
> sporadically.
>
> My reaction to Roger's assertion was that "group of self-selected 
> fellow citizens" and "under no system of governance whatsoever" were 
> either too vague or too idealistic for me to parse. So I didn't try.
>
> On 8/29/24 10:15, steve smith wrote:
>> Glen -
>> 
>>> Yeah, I'm not much of a fan of Pinker et al.'s arguments that show dropping infant mortality, poverty, violent crime, etc. But there is a point to be made that our governments, as technologies, are making a difference ... at least in *some* measures. Of course, governments are just like the other technologies and are pushing us toward existential threats like authoritarianism and climate change.
>> 
>> Can you elaborate on this:  "just like other technologies" and "pushing us toward existential threats"?
>> 
>> I have my own intuition and logic for believing this, but rather than blather it all out here, I'd like to peek under your assertion and see what you're thinking on this topic.
>> 
>> Also wondering if you or any of the usual suspects (including REC/DaveW) have thoughts about Roger's original assertion, given a stronger form of "Power Corrupts" stated as "Power IS Corruption"?
>> 
>> -Steve
>> 
>>>
>>> On 8/28/24 14:26, steve smith wrote:
>>>>
>>>>> There's no system of governance that hasn't been corrupted. They're all the worst forms of governance ever invented, except for the alternative of dealing with a group of self-selected fellow citizens under no system of governance whatsoever.
>>>>>
>>>>> -- rec --
>>>>
>>>> And being a fan of James Scott (The Art of Not Being Governed <https://www.goodreads.com/book/show/6477876-the-art-of-not-being-governed> and Against the Grain), I am inclined to respect this POV, while at the other end I am also quite the fan of Michael Levin's perspective on "what is life?", with all of its spread across scale, across complexity, and across species (in the broadest sense).
>>>>
>>>> So far we resemble a slime mold, pseudopods searching around and intruding/interpenetrating into one another in search of concentrated resources (like Russia's into Ukraine and now vice versa, or Israel/Palestine/Lebanon/???). Might we (collectively) become something more like a "proper" multicellular creature, or a balanced, healthy ecosystem (or system of ecosystems)?
>>>>
>>>> We have (only) been experimenting with large-scale self-organizing systems of humanity, scaffolded by material technology (lithics/copper/bronze/iron/steel through antimatter, quantum dots, and nanotech, to name a few) and by religio/socio/philosopho/politico-linguistic technology, for a handful (or two) of millennia, so it doesn't surprise me that we haven't wandered/mutated-selected our way into anything better than we have to date.
>>>>
>>>> I am (very guardedly) hopeful that the acceleration of the latter (linguistic technology) in LLMs and other ML/AI (material technology) will give us the possibility of rushing this phase forward. Pinker might claim we have had material (and psycho-social-spiritual) advancement over the centuries and decades, and maybe he is right in some sense... but the leap forward in collective self-governance/regulation/homeostasis we can all seem to imagine living under still feels beyond our (heretofore?) grasp.
>>>>
>>>> For better or worse, it feels to me that Kurzweil, for all his nonsense in predicting an imminent singularity, may be right... we will either self-organize into an Asimovian Foundation/psychohistory, galaxy-spanning culture (almost surely not) or implode into a Mad Max (or grey-goo/planet-Krypton) apocalypse. Maybe even in my lifetime; almost assuredly in my children's or grandchildren's.
>>>>
>
>
> -- 
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>
> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:  5/2017 thru present 
> https://redfish.com/pipermail/friam_redfish.com/
>   1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


