[FRIAM] Tragedy of the Commons & Free Riders

Roger Critchlow rec at elf.org
Thu Apr 1 09:34:15 EDT 2021


give a man a fish and you feed him for a day; teach a man to fish and
you feed him for a lifetime;
explain "profit" and you have no fish

-- rec --

On Wed, Mar 31, 2021 at 11:12 PM Eric Charles <
eric.phillip.charles at gmail.com> wrote:

> Hey there bub! You don't get to pawn your hard problems off on me! I mean
> whatever you would mean :- )
>
> I want to live in a society that solves the majority of these problems for
>> me, on a regular basis, in a regular way, with a regular solution.
>
>
> Yeah, agreed, that would be great. And it would be even better (right?) if
> the problems never arose, because society preempted them, rather than
> solving them after the fact.
>
> What does "better individuals" mean?
>
>
> People who don't create the problems you are concerned with solving...
> whatever those problems might be. In this particular case, we started out
> talking about tragedy-of-the-commons problems and the false free-rider
> problem. We could have people who encounter exactly those situations, with
> default algorithms that avoid the "problem" part.  If there are other
> things that you think would make the world better, we can tack those on
> too.
>
> I'm arguing for a middle-out approach.
>
>
> I'm once again not sure that what you're describing is much different than
> what I'm arguing for. You list pitfalls of being too invested in a purely
> bottom-up approach or a purely top-down approach, and I agree those are
> problems to be avoided.
>
>  it's reasonable to suggest that all the ranchers get together on a
>> dynamically allocated schedule to [dis]agree politely about who's (not)
>> doing what. Such on-demand problem solving is certainly possible. And the
>> more open-ended the participants' toolbox solutions are, the more likely
>> that will happen
>
>
> That sounds nice, but definitely isn't what I'm suggesting. Let's say you
> have 3 people grazing cattle on the commons, and that the land could provide
> ideal conditions for 12 cattle, with a standard tragedy-of-the-commons setup
> (where 13 cows produce less total meat, but whoever has an extra cow has more
> meat than that individual would have without the extra cow). You could build
> people for whom the 0-int, algorithmic response to such a situation was
> simply to bring 4 cows each. If you had those people to start with, it would
> take effort to explain to them why they might want to be open-ended in
> considering bringing an extra cow. Everything about trying to make an extra
> buck by undermining the collective would be unintuitive to them. They
> wouldn't have to talk through solving the problem; their default approach to
> the situation would simply not lead to "tragedy".
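>
> A minimal sketch of that payoff arithmetic, purely for illustration (the
> yield function and the specific numbers are assumptions, not part of the
> setup above): per-cow yield falls once the herd exceeds the land's capacity,
> so a 13th cow lowers the group's total while still raising the share of
> whoever brought it.
>
>     # Hypothetical payoff sketch for the 3-rancher example (numbers assumed)
>     CAPACITY = 12        # cows the commons supports at ideal yield
>     IDEAL_YIELD = 10.0   # meat units per cow at or below capacity
>
>     def yield_per_cow(total_cows):
>         # Assumed overgrazing penalty: yield drops 10% per cow past capacity.
>         over = max(0, total_cows - CAPACITY)
>         return IDEAL_YIELD * (1 - 0.1 * over)
>
>     for herds in ([4, 4, 4], [5, 4, 4]):  # everyone brings 4 vs. one brings 5
>         per = yield_per_cow(sum(herds))
>         print(herds, "total:", per * sum(herds), "first rancher:", per * herds[0])
>
>     # [4, 4, 4] total: 120.0 first rancher: 40.0
>     # [5, 4, 4] total: 117.0 first rancher: 45.0  (group loses, defector gains)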
>
> This is a means by which society can "solve the problem for you". One way
> or another the solution is: People who don't do "tragedy" when presented
> with a commons. The question is how we get such people. Maybe we get such
> people because we fine or arrest anyone who starts down the tragic path.
> Maybe we let people head in that direction, but have a tax system that
> continuously removes the extra funds from their pockets and funnels that
> extra money into the maintenance of the commons, thereby creating people who
> indirectly pay to clean up the almost-tragedies they would otherwise
> create. Presumably many other solutions are available. However, the ideal
> solution, I assert, *if we could achieve it,* would simply be to have
> people who don't want to *do* "tragedy", despite having the opportunity.
> If no one wants to bring an extra cow, then we're good from the get go.
>
> P.S. I take my whiskey neat because that's the best way to have it. When I
> drink coffee, it gets a lot of milk and sugar. If I'm not in a mood to
> specify, then there are plenty of things to drink that taste good without
> modification ;- )
>
>
> On Mon, Mar 29, 2021 at 11:19 AM uǝlƃ ↙↙↙ <gepropella at gmail.com> wrote:
>
>> OK. I have to make a technical correction, first. Then I'll move on to
>> the meat. By 0-int, I mostly mean "algorithmic". I'm not suggesting that
>> 0-int agents are simple or uncomplicated, only that they are limited in the
>> open-endedness of their "algorithms". By claiming I'm a 0-int agent, what I
>> mean is that I'm a fairly limited set of canned solutions/heuristics to
>> common situations. Coming up with a *new* solution for every tweaked
>> circumstance presents a combinatorial explosion that my pathetic toolbox of
>> heuristics is incapable of handling.
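>>
>> A toy rendering of that idea, just as a sketch (the situation labels and
>> canned responses below are invented for illustration): a 0-int agent maps
>> each recognized situation to a fixed heuristic and falls back to a cheap
>> default instead of searching for a novel solution.
>>
>>     # Hypothetical 0-int agent: a fixed table of canned heuristics, no search.
>>     CANNED = {
>>         "shared_pasture": "bring_your_fair_share",
>>         "restaurant": "order_off_the_menu",
>>         "whiskey": "take_it_neat",
>>     }
>>
>>     def zero_int_agent(situation):
>>         # Unknown situations get a cheap default, not a custom solution.
>>         return CANNED.get(situation, "do_what_everyone_else_does")
>>
>>     print(zero_int_agent("shared_pasture"))   # bring_your_fair_share
>>     print(zero_int_agent("novel_dilemma"))    # do_what_everyone_else_does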
>>
>> E.g. your rancher example -- Sure, it's reasonable to suggest that all
>> the ranchers get together on a dynamically allocated schedule to [dis]agree
>> politely about who's (not) doing what. Such on-demand problem solving is
>> certainly possible. And the more open-ended the participants' toolbox
>> solutions are, the more likely that will happen [⛧]. But sometimes ...
>> sometimes, even open-ended, benefit-of-the-doubt, polite people need some
>> time off from all that custom, on-demand, solution finding.
>>
>> That's why I order off the menu instead of what actually tastes better.
>> Why I take my coffee black and my whiskey neat. Etc. I just don't have the
>> energy, or algorithmic flexibility to custom-design a solution to each and
>> every circumstance.
>>
>> I.e. I DON'T WANT to be a better person. I want to live in a society that
>> solves the majority of these problems for me, on a regular basis, in a
>> regular way, with a regular solution.
>>
>> That's the technicality. The meat of our disagreement, I think, is
>> bottom-up vs top-down. To the extent we could indoctrinate every individual
>> toward open-endedness, spread them out and give them the energy and
>> curiosity required, it would provide for better coverage, both in time
>> (rates of solutions) and space (variables addressed, composition of
>> solutions, etc.). I agree with you on that point. But by disagreeing with
>> you, I'm NOT taking a top-down perspective ... like some kind of state
>> communism. I'm arguing for a middle-out approach. Starting from the bottom,
>> with no destination in mind, ensures you'll never arrive. Starting at the
>> top, with no atomic model of composable parts, ensures your castle will
>> sink into the swamp. The only rational approach is middle-up and
>> middle-down (not least because assertions about the bottom and top are
>> metaphysical).
>>
>> So, targeting your concrete language directly: what does "better
>> individuals" mean? Without a rock-solid idea of what an individual is, and
>> without a rock-solid value system of what's "better", any system designed
>> to optimize individuals will have more unintended than intended
>> consequences. Would hyper-specialized magnet schools be better than broad
>> spectrum liberal arts? Are "screens" bad and ink-on-paper good? Should all
>> children be taught category theory *first*? Etc. There's plenty of work to
>> do in this area. But ultimately, unless the higher order systems in which
>> the individuals live are studied and engineered, the work done at the
>> individual scope is Pyrrhic.
>>
>>
>> [⛧] In fact, it does happen, regardless of how intricate the landscape.
>> No matter what bureaucracy you put in place, no matter how detailed and
>> complete, the open-ended people will wiggle within or around it.
>>
>>
>> On 3/28/21 2:37 PM, Eric Charles wrote:
>> > I'm not sure "optimism" is the point of disagreement, beyond maybe
>> optimism regarding our ability to probabilistically manipulate development
>> trajectories.
>> >
>> >     Our task... is to classify which infrastructure increases liberty.
>> And to engineer it into place.
>> >
>> >
>> > Yes, at least in the abstract I think we agree on that. The next
>> question, I think, is whether we can find a way to characterize the types
>> of infrastructure that tend to be good vs types that tend to be bad.
>> (Acknowledging all the potential pitfalls of trying to categorize.) My bias
>> is that, insofar as it proves possible, we lean towards infrastructure
>> that builds better people, rather than towards infrastructure that makes it
>> impossible for people to behave badly.
>> >
>> >
>> >     Your setup attributes more commitment/energy from the agents ...
>> energy they don't have or are unwilling to spend on that organizing setup.
>> I tend to regard the agents as less intelligent than you consider them,
>> likely because I'm less intelligent than you are ... we all generalize from
>> our selves. I grok 0-intelligence agents because I am a 0-int agent!
>> >
>> >
>> > If I gave that impression, I apologize! Obviously your insinuation that
>> I generally prefer dealing with intelligent people is correct, but also I
>> am quite averse to the idea that the world would be better if everyone
>> suddenly became much more intelligent. Intelligent people do stupid and
>> horrible things all the time, even in intellectual/polymath groups.
>> >
>> > To try to be more clear, I do *not* think we need to make people so
>> intelligent that they understand the "tragedy of the commons", and avoid it
>> due to their superior intellectual skills. I'm *just* saying that we should make
>> people who, when they find themselves in such situations, behave in ways
>> that don't lead to tragedy. Can we do that with 100% accuracy? Probably
>> not. Or, at the least, not without methods I would probably judge
>> unethical. Could we arrange learning situations so that a much larger % of
>> the population avoided tragedy in such situations? For sure. Children's TV,
>> video games with different rule sets, curricular lessons that engage
>> students in relevant situations without ever once mentioning the "tragedy"
>> or trying to reason it out explicitly, etc. The goal is to produce
>> "0-intelligence agents" that achieve better outcomes in commons-types of
>> situations, without having to think it through.
>> >
>> > Some, who specialize in education, or other related topics, will later
>> learn why the curriculum is constructed in that way, but that's a different
>> conversation, for a much later stage of life.
>> >
>> > This happens all the time, quite effectively, but usually to shitty
>> ends. Laws, regulations, and engrained habits that disallowed the showing
>> of interracial couples on TV created generations that find such things
>> "unnatural". Ditto any non-vanilla variety of sexual behavior and gender
>> identification. Flooding the airwaves in the other direction is creating a
>> generation that, when in 0-intelligence mode, navigates through the world
>> quite differently. My kids are only a few years apart and I can see it
>> between them. We were watching a cartoon and my older said something
>> like "It's great to see a show with all this gender representation and
>> without being judgy about sexuality", and my younger looked confused and
>> honestly asked "Why is that odd?"
>> >
>> > I don't think it is overly optimistic to think we could make
>> significant progress on the issues you and I both care about with
>> investment in /that/ type of infrastructure.
>> >
>> >
>> >
>> >
>> > On Thu, Mar 25, 2021 at 5:48 AM ⛧ glen <gepropella at gmail.com <mailto:
>> gepropella at gmail.com>> wrote:
>> >
>> >     We do agree in our values. But we disagree in our optimism. The
>> ecology you propose burns more energy than mine. Your setup attributes more
>> commitment/energy from the agents ... energy they don't have or are
>> unwilling to spend on that organizing setup. I tend to regard the agents as
>> less intelligent than you consider them, likely because I'm less
>> intelligent than you are ... we all generalize from our selves. I grok
>> 0-intelligence agents because I am a 0-int agent! You, having rolled up a
>> good character at the start of the campaign, are deluded into thinking
>> everyone else also rolled well. 8^D
>> >
>> >     In Utopia, all the agents spend reasonable amounts of energy, along
>> diverse channels, to drive the ecology. But in this world, government is a
>> necessary efficiency. Throughout history, when we *rely* on the individuals
>> to do all this diverse work, they don't, even if, in an ideal world, they
>> could.
>> >
>> >     So we build infrastructure, eg government, to make the individuals
>> more effective, to channel whatever energy/intelligence they have.
>> >
>> >     Where our worlds meet, though, is that SOME infrastructure is
>> debilitating. And SOME infrastructure is liberating. We agree that
>> liberating government is good. And debilitating government is bad.
>> >
>> >     Our task, then, is to classify which infrastructure increases
>> liberty. And to engineer it into place. But that's very hard when so many
>> of us maintain, despite the evidence, that all infrastructure is always bad.
>> >
>> >
>> >     On March 24, 2021 8:24:38 PM PDT, Eric Charles <
>> eric.phillip.charles at gmail.com <mailto:eric.phillip.charles at gmail.com>>
>> wrote:
>> >     >I think we probably pretty much agree.
>> >     >
>> >     >"It's a convenient fiction, or perhaps an approximating
>> simplification"
>> >     >---
>> >     >Yes! But we need some of those, and "the individual" is one that
>> >     >appeals to
>> >     >me.
>> >     >
>> >     >
>> >     --
>> >     glen ⛧
>>
>>
>> --
>> ↙↙↙ uǝlƃ
>>
>> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
>> FRIAM Applied Complexity Group listserv
>> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
>> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>> FRIAM-COMIC http://friam-comic.blogspot.com/
>> archives: http://friam.471366.n2.nabble.com/
>>