[FRIAM] health care logistics

Marcus Daniels marcus at snoutfarm.com
Mon Jan 24 16:53:43 EST 2022


I guess I don't get it.   On one hand an agent is defined by her many-body interactions with others; on the other hand, somehow some enduring properties of the individual agents -- these principles -- matter with regard to maintaining the many-body relationships.   Intuitively, if some subset of agents can up and change the force fields in this many-body system however they might fancy, it seems like that would, well, shake the system up in some interesting way, and shed more light on the space of the possible than if the couplings were changing in some scheduled or predictable way?   It reminds me of this article:

https://cp4space.hatsya.com/2022/01/14/conway-conjecture-settled/

"Ilkka Törmä<https://users.utu.fi/iatorm/> and Ville Salo<http://www.villesalo.com/>, a pair of researchers at the University of Turku in Finland, have found a finite configuration in Conway’s Game of Life such that, if it occurs within a universe at time T, it must have existed in that same position at time T−1 (and therefore, by induction, at time 0)."

Some predictable evolution, but really for what?
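
For reference, the dynamics in question are just the standard B3/S23 Life rule; the Törmä/Salo configuration itself is a specific large pattern and isn't reproduced here. A minimal, purely illustrative sketch of the update rule:

from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23: a cell is live next step if it has 3 live neighbors,
    # or has 2 live neighbors and is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# Sanity check: a blinker returns to itself after two steps.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker

The surprising part of the result is that, for their pattern, occurrence at time T forces occurrence at time T-1 under this same rule, and hence (by induction) occurrence at time 0.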

________________________________
From: Friam <friam-bounces at redfish.com> on behalf of glen <gepropella at gmail.com>
Sent: Monday, January 24, 2022 2:36 PM
To: friam at redfish.com <friam at redfish.com>
Subject: Re: [FRIAM] health care logistics

Hm. So you're distinguishing between someone calling the actor "corrupt" versus the actor, itself, thinking it's "corrupted". I agree that a closed actor proceeds with (little or) no concern for its neighbors' opinions. But that doesn't mean it's immune to corruption, even purely *social* corruption. If the environment is changed by one of the other actors such that the constraints on the closed actor change in some way, then that change can send the closed actor into a self-reinforcing dynamic from which it can't escape. To boot, if the closed actor has a memory and an anticipatory capability to model itself, then the interactions between its past, present, and future selves are similar to (if not identical with) social interactions, again allowing patterns like corruption.
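
A toy sketch of that kind of trap, purely illustrative (the threshold and gains are arbitrary, not drawn from anything above): the actor's state ordinarily relaxes back toward neutral, but a single external change to its constraints pushes it past a threshold beyond which its own update rule locks it in.

def actor_step(x, external_push=0.0):
    """One update of a closed actor's internal state x; it never consults anyone's opinion."""
    x = x + external_push
    if x < 0.3:
        return 0.9 * x              # below threshold: drifts back toward neutral
    return x + 0.5 * (1.0 - x)      # above threshold: self-reinforcing, locks in toward 1

x = 0.1
for t in range(20):
    # At t == 5 a neighboring actor changes the environment once, then never again.
    x = actor_step(x, external_push=0.3 if t == 5 else 0.0)
print(round(x, 3))  # ~1.0: the one-time perturbation is never undone

Nothing in the update depends on the actor caring about outside evaluations; the constraint change alone does the work.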

If, inside the closed system, the physics is (somehow) open, then the anticipatory model may not be accurate enough to expect an eventual outcome. Its model is falsified, and whatever faith was placed in the model in times prior, the system's been corrupted because it differs from the anticipation.

Of course, this is all "angels on the head of a pin" because there are no completely closed systems. Even if we accept closure in some dimension, openness is required in other dimensions.

On 1/24/22 13:23, Marcus Daniels wrote:
> Glen writes:
>
> < That's not true at all. Closed systems do have disclosures in terms of the behavior of their boundary. Granted, one may not have constitutive understanding of what's happening inside the membrane. But one can profile the behavior of the surface. And if that behavior changes over time, then it's capable of corruption.>
>
> My point is that if one has enough resources, it simply doesn't matter whether the constitutive understanding of what is happening inside the membrane is achieved by outsiders, nor does it matter how outsiders evaluate the changes on the surface.  The closed system simply does not concern itself with such evaluations as they are of no consequence.   It can jump between sets of principles for any number of reasons, including amusement.   I am of course thinking of Q!  (Musk is not quite there.)
>
> https://en.wikipedia.org/wiki/Q_(Star_Trek)
>
> Marcus
> ________________________________
> *From:* Friam <friam-bounces at redfish.com> on behalf of glen <gepropella at gmail.com>
> *Sent:* Monday, January 24, 2022 2:13 PM
> *To:* friam at redfish.com <friam at redfish.com>
> *Subject:* Re: [FRIAM] health care logistics
> That's not true at all. Closed systems do have disclosures in terms of the behavior of their boundary. Granted, one may not have constitutive understanding of what's happening inside the membrane. But one can profile the behavior of the surface. And if that behavior changes over time, then it's capable of corruption.
>
> On 1/24/22 13:07, Marcus Daniels wrote:
>> Closed systems don't have disclosures, so there can't be this social notion of corruption.  I changed my mind today:  I'll put new quarters into the machine and see what happens.
>>
>> Marcus
>> ________________________________
>> *From:* Friam <friam-bounces at redfish.com> on behalf of glen <gepropella at gmail.com>
>> *Sent:* Monday, January 24, 2022 1:26 PM
>> *To:* friam at redfish.com <friam at redfish.com>
>> *Subject:* Re: [FRIAM] health care logistics
>> Well, yeah, I see little evidence of such principle-free attractors too ... because I'm conditioned to think all actors have prior principles and, when they achieve power, they slide into corruption. I.e. I see no evidence that there is any such thing as an unprincipled actor. And that means, as an actor drifts from one set of principles to another, that is corruption. The only way out of that is to be vague about your principles (plausible deniability) or make your principles *generic* enough to apply to multiple, parallaxed, methods/behaviors ... or both.
>>
>> Arguments that rely on generalized principles are everywhere, from biology (survival, food, gene transfer, etc.) to psychology to politics. Even if many of those turn out to actually be vagaries instead of generalizations, those are the arguments ... and in the context of those arguments, power corrupts.
>>
>> One of the advantages the Bayesians and postmodernists have is they admit up front that their ephemerides will change. And as long as they're fairly clear about how they'll *try* to update their systems in a transparent way, then it's difficult to accuse them of corruption. Hence, secrecy and closed systems are more readily corruptible than open ones.
>>
>> On 1/24/22 11:56, Marcus Daniels wrote:
>>> If what you mean is that there are consequences to indifference to the environment and to each other, I don't see a lot of evidence of such attractors.   If there are no principles and we are merely beings that notice attractors and name them, there's not much point in ideology, religion, and so on.   Perhaps they were always delusions, and it was only the Musks, etc., who have found, through their wealth, the autonomy to come to grips with that.   Counterexamples like Putin come to mind, where it does seem to be a reinforcement issue.
>>>
>>> Marcus
>>> ________________________________
>>> *From:* Friam <friam-bounces at redfish.com> on behalf of glen <gepropella at gmail.com>
>>> *Sent:* Monday, January 24, 2022 12:41 PM
>>> *To:* friam at redfish.com <friam at redfish.com>
>>> *Subject:* Re: [FRIAM] health care logistics
>>> Scaled need for entropy: It's not clear to me why we'd believe smaller orgs need less entropy. I agree they have smaller *stores* of "energy". And, to some extent, I can see that some ways entropy manifests could dissipate those stores more than they accumulate them. (Regarding meeting objectives as one kind of store ... e.g. using money to achieve some objective is a - perhaps inefficient - transfer from one store to another -- since Tom posted about SysDyn.) But I could easily argue that small orgs need *more* entropy than large orgs.
>>>
>>> Semantics of Corruption: Well, I agree that one can't be corrupt if one has no principles from the start. This is, I think, a fundamental part of the arguments in favor of open-ended evolution (and extending into metaphysics like parallel worlds). But even if we gave up on the idea that there's an in-principle set of values to start with, we can still arrive at an attractor so strong that the system will never leave it. The argument against Growth and the need for a "paradigm shift" is exactly such an argument. We're so brainwashed by that paradigm, even those of us who see the engine's headlight at the end of the tunnel can't think any differently. So ... how could I say it so you agree with it? Power is self-reinforcing even when it becomes obsolete?
>>>
>>>
>>> On 1/24/22 11:30, Marcus Daniels wrote:
>>>> Employees in a large organization are in one sense cells, but in another sense parasites.   (The largest parasite being the CEO.)  Nevertheless, the organization needs the diversity of these agents -- whatever one calls them -- to innovate and survive.   Without the entropy, the organization is just a machine, and the people can be replaced with simple robots.   It is small organizations, where there is less ability to take on debt and tolerate waste, where shared values can help keep focus in a situation of limited resources.
>>>>
>>>> I don't really buy your claim that power corrupts.   One could just as well say that being weak makes one rationalize their weakness.   If there isn't a shared value system, there is no reason to say that it has been corrupted.   Perhaps, rather, once entropy is eliminated, death will soon follow.  Entropy could still be high and inter-group violence common.
>>>>
>>>> Marcus
>>>>
>>>> ________________________________
>>>> *From:* Friam <friam-bounces at redfish.com> on behalf of glen <gepropella at gmail.com>
>>>> *Sent:* Monday, January 24, 2022 12:00 PM
>>>> *To:* friam at redfish.com <friam at redfish.com>
>>>> *Subject:* Re: [FRIAM] health care logistics
>>>> At first, I struggled to see how this mapped to health care logistics. But on 2nd read, it clearly does.
>>>>
>>>> The questions that now dominate are a) shared values - even if it's overshoot and we know it's overshoot, do the exploiters (and their rhetorical victims) care at all about the same things the ... "earthists" or "humanists" or "biodiversisists" might care about? And b) nonlinear exploitation power - orthogonal to shared values, is it possible the space/landscape has changed so radically that the tiny produce we now exploit might have a huge impact going forward? (Or, maybe vice versa, every Joule we squeeze out now has a much smaller impact than the Joules we extracted in the '60s?)
>>>>
>>>> Those questions translate to health care in the form of motivation comparison between, e.g., pharma employees. Some are in it for the science. Some are in it for the money. Some are humanitarians. Etc. Do the executives share the values of their employees?  A little? A lot? The same with insurance underwriters, financialists at hospitals and offices, etc.
>>>>
>>>> Technically, it's completely reasonable to NOT implement bootstrappable systems, systems "written in" themselves. We've talked a lot on this list about self-reference and if/where we use the words "tautology" or "degeneracy". Even if we assume the shared value that earth is just the initial *seed* for life and that seed will be a dried up husk when we diaspora into the galaxy, *when* will we have to solve the sustainability question? Perhaps we should solve it for our 2nd planet? Or maybe we iterate slowly from our current non-bootstrapping algorithm of "growth" toward an algorithm of sustainability?
>>>>
>>>> The same argument goes for the Big Software argument proffered by Dr. Coon. Sure, open source packages developed by some kid in Iowa shouldn't found the entire Java-based infrastructure. But, similarly, not every piece of crypto or opsec needs to come from Israel or the NSA. Can we move between and within Big Software and hacking? Can we move between Growth and Sustainability?
>>>>
>>>> And more importantly, should we all agree on values, like some fascist state? Or is there room for reasonable disagreement or meandering non-equilibria?
>>>>
>>>> On 1/21/22 13:00, David Eric Smith wrote:
>>>>> Some of the condensations in this thread, as causal interpretations of social dynamics, are real gems.  They are much more interesting as claims than the endlessly recycled platitudes that seem to be all I am seeing in punditry.
>>>>>
>>>>> I have wondered about sending the following to the list, but this is probably a good thread in which to do it:
>>>>> https://ideas.repec.org/a/gam/jeners/v14y2021i15p4508-d601755.html
>>>>>
>>>>> The claims are about important things.  They say that the sustainability rhetoric is so riddled with pie in the sky that it is not clear that an analysis of what we can actually do would even support goal-setting along the lines that are currently practiced.   For certain apps built on the libraries of sustainability, like the rhetoric of Green New Deal, the most-central aspiration (not curtailing population and energy consumption, and just replacing their sources) may actually be impossible in the sense that perpetual motion machines are impossible.  The other important factor is that we don’t get the dodge “but in the long run”, because the claim is that in a relatively short run we are all dead (or at least a great many of us, and the rest have greatly reduced options for what to do about anything).
>>>>>
>>>>> The important thing about the article (I know the author Rees) is that it tries to back up its claims with analysis where possible.  Some of the citations I consider a bit dodgy, but others are probably sound.  That does _not_ mean I am claiming the conclusions of the paper are right.  I haven’t done any shred of the work it would take me to backfill that tree of citations and take responsibility for deciding which of them I understand to be right.
>>>>>
>>>>> It is also important (to me, for my own reasons) to say that I do not mean _any_ blame for hypocrisy or bad faith toward a lot of the serious sustainability people, or even the GND advocates.  They work partly in a realm of human persuasion, and they are trying not to let the perfect undermine doing _something_ that might be good, or at least a little better.  I don’t know how many of the GND rhetoricians even have a detailed understanding of our current situation, and among those (if there are any), how many would agree that it is as bad as Rees asserts.  There might be some, who would still do what persuasion they can because they don’t have ideas for what might be more helpful.
>>>>>
>>>>> I should also add that there is a lot not covered in this particular paper, where I have listened to claims of large unavoidable cascading failures.  Climate change leading to failure of Himalayan snowpacks that are the headwaters of rivers that supply drinking water, sanitation, irrigation, and hydropower to something like 1/4 of the world’s population, through infrastructure that has been built over a century, and can’t simply be moved or replaced.  That stops working and people start moving, and then all the stresses we already see around migration get amplified to much higher levels.  etc.  Those, too, I have not tried to either evaluate or get sources I can trust blindly.  But if they are real, they belong in view as well.
>>>>>
>>>>> Finally, I want to distance myself a bit from the affect and some overall impression in this piece, or by these authors.  I have no interest in whether something is heterodox or any other kind of dox.  The misanthropy that comes through in their scornful delivery in places, but also their claim that there are “graceful” exits with as little as 1-child policies, are to me departures (understandable, but still departures) from the thing that makes the article valuable, which is the substance of its claims about what exists and what can be assembled into systems.  I think one can keep the claims as important questions and let the other stuff go its own ways.
>>>>>
>>>>> Anyway, more than I know how to chew on,
>>>>>
>>>>> Eric
>>>>>
>>>>>
>>>>>
>>>>>> On Jan 21, 2022, at 11:47 AM, glen <gepropella at gmail.com> wrote:
>>>>>>
>>>>>> Well, except that this solipsism betrays a profound similarity between the cheerful billionaire exploiter and the unfixable deplorables. It's almost psychotically self-centered. I can imagine a slow, corrupting process where I would if I could, as well. But that transformation would have to be complete closure to prevent any light of empathy or sympathy from peeking in and popping the boil.
>>>>>>
>>>>>> I suppose people like Gates are more interesting than Musk, shambling about extruding money according to an opaque template ... less transparently ideological than Musk's profiteering. All philanthropy smacks of this sort of thing, though, Effective Altruism being the worst of the bunch. Power corrupts. It's not a lesson the non-powerful can actually learn, though. So it's a good thing to keep around a nicely scaled gradation of the super rich and the destitute poor, with some walkability up and down the scale. That way we can, as a collective, re-learn the lesson that power corrupts on a steady basis. The assumption of equality prevents that lesson from being re-learned. The absurdities of philanthropy and poverty are "collateral damage" in service of the latent trait, spoken as a well-off white man born into a racist patriarchy, anyway.
>>>>>>
>>>>>> On 1/21/22 08:31, Marcus Daniels wrote:
>>>>>>> If anything, Musk is suspicious because he is not overtly apocalyptic.   Some criticisms of Don’t Look Up were along the lines that it fails to try to persuade a change of course in favor of being condescending.  That was the whole point of the movie:   Comic relief among the reasonable who must suffer those who are just unfixable.  Musk is amusing because he is cheerful going about his billionaire life as it all comes crashing down.  Doing what he can to profit from insane energy policy of the last several generations and making what contingency plans he can.  I certainly would if I could.
>>>>>>>> On Jan 21, 2022, at 7:48 AM, glen <gepropella at gmail.com> wrote:
>>>>>>>>
>>>>>>>> This video essay concludes with the same point:
>>>>>>>>
>>>>>>>> The Fake Futurism of Elon Musk
>>>>>>>> https://youtu.be/5OtKEetGy2Y
>>>>>>>>
>>>>>>>> Perhaps a better title would have been "Muskian Futurism is Eschatological". But there's some deeper stuff there in the middle of the video about the appeal of geezers like Sanders to "the youth", perhaps dovetailing with our prior discussion of the [opt|pess]imism vs hope-despair plane. The mistake the Muskians seem to make is conflating Musk's "apocalyptic help the rich survive the end times capitalism" with the good old fashioned future orientation of classic science fiction ... and, perhaps, even the optimistic glossing of the present by authors like Steven Pinker. While Pinker seems to be a hypnotized neoliberal cultist, his views still retain some sense of "shared values" in the Enlightenment, where something, vague as it is, like equality founds the whole perspective. Egalitarian utopias like Star Trek were, it seemed to me, standard fare for classic sci-fi. Gibson, Blade Runner, et al turned that dark and brought us (perhaps correlated with the rise of Hell and
>>>>>>>> Brimstone Christianity) to Muskianism.
>>>>>>>>
>>>>>>>> But this is all just from my nostalgizing as a dying white man. It would be interesting to see a disinterested historian present the plectic arcs.
>>>>>>>>
>>>>>>>>> On 1/20/22 14:33, glen wrote:
>>>>>>>>> Even if there are multiple paths to nearly equivalent optima, each unit (human, hospital, corporation, state) has to share some values with the others in order for the optima to be commensurate.
>>>>>>>>


--
glen
Theorem 3. There exists a double master function.

.-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:
 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/