<div dir="ltr"><div>give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime; explain "profit" and you have no fish<br></div><div><br></div><div>-- rec --</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Mar 31, 2021 at 11:12 PM Eric Charles <<a href="mailto:eric.phillip.charles@gmail.com">eric.phillip.charles@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hey there bub! You don't get to pawn your hard problems off on me! I mean whatever you would mean :- )<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font color="#0000ff">I want to live in a society that solves the majority of these problems for me, on a regular basis, in a regular way, with a regular solution.</font></blockquote><div><br></div><div>Yeah, agreed, that would be great. And it would be even better (right?) if the problems never arose, because society preempted them, rather than solving them after the fact. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font color="#0000ff">What does "better individuals" mean? </font></blockquote><div><br></div><div>People who don't create the problems you are concerned with solving... whatever those problems might be. In this particular case, we started out talking about tragedy-of-the-commons problems and the false free-rider problem. We could have people who encounter exactly those situations, with default algorithms that avoid the "problem" part. If there are other things that you think would make the world better, we can tack those on too. 
</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font color="#0000ff">I'm arguing for a middle-out approach.</font></blockquote><div><br></div><div>I'm once again not sure that what you're describing is much different from what I'm arguing for. You list pitfalls of being too invested in a purely bottom-up approach or a purely top-down approach, and I agree those are problems to be avoided. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><font color="#0000ff"> it's reasonable to suggest that all the ranchers get together on a dynamically allocated schedule to [dis]agree politely about who's (not) doing what. Such on-demand problem solving is certainly possible. And the more open-ended the participants' toolbox solutions are, the more likely that will happen</font></blockquote><div><br></div><div>That sounds nice, but definitely isn't what I'm suggesting. Let's say you have 3 people grazing on the commons, and that the land could provide ideal conditions for 12 cattle, with a standard tragedy-of-the-commons setup (where 13 cows produce less meat, but whoever has an extra cow has more meat than that individual would have without the extra cow). You could build people for whom the 0-int, algorithmic response to such a situation was simply to bring 4 cows each. If you had those people to start with, it would take effort to explain to them why they might want to be open-ended in considering bringing an extra cow. Everything about trying to make an extra buck by undermining the collective would be unintuitive to them. They wouldn't have to talk through solving the problem; their default approach to the situation would simply not lead to "tragedy". </div><div><br></div><div>This is a means by which society can "solve the problem for you". 
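To make the payoff arithmetic concrete, here's a toy sketch of that commons. Every number in it is an illustrative assumption of mine (linear yield up to a 12-cow capacity, per-cow yield degrading past it, meat split in proportion to herd size), not a claim about real grazing:

```python
CAPACITY = 12    # cows the commons supports at peak yield (assumed)
PEAK_YIELD = 10  # meat per cow at or under capacity (assumed)
PENALTY = 1      # per-cow yield lost for each cow over capacity (assumed)

def total_meat(cows):
    """Total yield: linear up to capacity, degrading past it."""
    if cows <= CAPACITY:
        return cows * PEAK_YIELD
    return cows * (PEAK_YIELD - PENALTY * (cows - CAPACITY))

def my_meat(mine, others):
    """One rancher's share, proportional to herd size."""
    herd = mine + others
    return total_meat(herd) * mine / herd

# Cooperative default: three ranchers bring 4 cows each.
cooperate = my_meat(4, 8)   # 40.0 each
# One rancher defects with a 5th cow; the other two stay at 4.
defect = my_meat(5, 8)      # 45.0 for the defector...
sucker = my_meat(4, 9)      # ...but only 36.0 for each cooperator,
shrunk = total_meat(13)     # and 117 total meat instead of 120.
```

Under these assumed numbers the 13th cow shrinks the total harvest, yet the rancher who brings it still comes out ahead, which is exactly why "bring 4" has to be the built-in default rather than a deliberated conclusion.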
One way or another the solution is: People who don't do "tragedy" when presented with a commons. The question is how we get such people. Maybe we get such people because we fine or arrest anyone who starts down the tragic path. Maybe we let people head that direction, but we have a tax system that continuously removes extra funds from their pockets and funnels that extra money into the maintenance of the commons, thereby creating people who indirectly pay to clean up the almost-tragedies they would otherwise create. Presumably many other solutions are available. However, the ideal solution, I assert, <i>if we could achieve it,</i> would simply be to have people who don't want to <i><u>do</u></i> "tragedy", despite having the opportunity. If no one wants to bring an extra cow, then we're good from the get-go. <br clear="all"><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div><div>P.S. I take my whiskey neat because that's the best way to have it. When I drink coffee, it gets a lot of milk and sugar. If I'm not in a mood to specify, then there are plenty of things to drink that taste good without modification ;- ) </div><div dir="ltr"><br></div></div></div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 29, 2021 at 11:19 AM uǝlƃ ↙↙↙ <<a href="mailto:gepropella@gmail.com" target="_blank">gepropella@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">OK. I have to make a technical correction, first. Then I'll move on to the meat. By 0-int, I mostly mean "algorithmic". I'm not suggesting that 0-int agents are simple or uncomplicated, only that they are limited in the open-endedness of their "algorithms". By claiming I'm a 0-int agent, what I mean is that I'm a fairly limited set of canned solutions/heuristics to common situations. 
Coming up with a *new* solution for every tweaked circumstance presents a combinatorial explosion that my pathetic toolbox of heuristics is incapable of handling.<br>
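A literal-minded sketch of what I mean (my own toy construction, not a formal model): a 0-int agent is just a fixed lookup table of canned responses, with a cheap fallback in place of on-demand solution design:

```python
class ZeroIntAgent:
    """A "0-int" agent: a fixed playbook of canned responses.

    Novel situations don't trigger custom problem solving;
    they fall through to a cheap default.
    """

    def __init__(self, playbook, default="pick the house option"):
        self.playbook = playbook  # situation -> canned response
        self.default = default

    def respond(self, situation):
        # No deliberation: look it up, or fall back.
        return self.playbook.get(situation, self.default)

glen = ZeroIntAgent({
    "coffee": "black",
    "whiskey": "neat",
    "commons, capacity 12, three ranchers": "bring 4 cows",
})

glen.respond("whiskey")               # 'neat'
glen.respond("tweaked circumstance")  # 'pick the house option'
```

The point of the sketch is the absence of any search or synthesis inside respond(): the playbook stays a fixed size while the space of tweaked circumstances explodes combinatorially.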
<br>
E.g. your rancher example -- Sure, it's reasonable to suggest that all the ranchers get together on a dynamically allocated schedule to [dis]agree politely about who's (not) doing what. Such on-demand problem solving is certainly possible. And the more open-ended the participants' toolbox solutions are, the more likely that will happen [⛧]. But sometimes ... sometimes, even open-ended, benefit-of-the-doubt, polite people need some time off from all that custom, on-demand, solution finding. <br>
<br>
That's why I order off the menu instead of what actually tastes better. Why I take my coffee black and my whiskey neat. Etc. I just don't have the energy, or algorithmic flexibility to custom-design a solution to each and every circumstance.<br>
<br>
I.e. I DON'T WANT to be a better person. I want to live in a society that solves the majority of these problems for me, on a regular basis, in a regular way, with a regular solution.<br>
<br>
That's the technicality. The meat of our disagreement, I think, is bottom-up vs top-down. To the extent we could indoctrinate every individual toward open-endedness, spread them out and give them the energy and curiosity required, it would provide for better coverage, both in time (rates of solutions) and space (variables addressed, composition of solutions, etc.). I agree with you on that point. But by disagreeing with you, I'm NOT taking a top-down perspective ... like some kind of state communism. I'm arguing for a middle-out approach. Starting from the bottom, with no destination in mind, ensures you'll never arrive. Starting at the top, with no atomic model of composable parts, ensures your castle will sink into the swamp. The only rational approach is middle-up and middle-down (not least because assertions about the bottom and top are metaphysical).<br>
<br>
So, targeting your concrete language directly: what does "better individuals" mean? Without a rock-solid idea of what an individual is, and without a rock-solid value system of what's "better", any system designed to optimize individuals will have more unintended than intended consequences. Would hyper-specialized magnet schools be better than broad-spectrum liberal arts? Are "screens" bad and ink-on-paper good? Should all children be taught category theory *first*? Etc. There's plenty of work to do in this area. But ultimately, unless the higher-order systems in which the individuals live are studied and engineered, the work done at the individual scope is Pyrrhic.<br>
<br>
<br>
[⛧] In fact, it does happen, regardless of how intricate the landscape. No matter what bureaucracy you put in place, no matter how detailed and complete, the open-ended people will wiggle within or around it.<br>
<br>
<br>
On 3/28/21 2:37 PM, Eric Charles wrote:<br>
> I'm not sure "optimism" is the point of disagreement, beyond maybe optimism regarding our ability to probabilistically manipulate development trajectories. <br>
> <br>
> Our task... is to classify which infrastructure increases liberty. And to engineer it into place. <br>
> <br>
> <br>
> Yes, at least in the abstract I think we agree on that. The next question, I think, is whether we can find a way to characterize the types of infrastructure that tend to be good vs types that tend to be bad. (Acknowledging all the potential pitfalls of trying to categorize.) My bias is, so far as efforts prove possible, towards infrastructure that builds better people, rather than towards infrastructure that makes it impossible for people to behave badly.<br>
> <br>
> <br>
> Your setup attributes more commitment/energy from the agents ... energy they don't have or are unwilling to spend on that organizing setup. I tend to regard the agents as less intelligent than you consider them, likely because I'm less intelligent than you are ... we all generalize from our selves. I grok 0-intelligence agents because I am a 0-int agent! .<br>
> <br>
> <br>
> If I gave that impression, I apologize! Obviously your insinuation that I generally prefer dealing with intelligent people is correct, but I am also quite averse to the idea that the world would be better if everyone suddenly became much more intelligent. Intelligent people do stupid and horrible things all the time, even in the few intellectual/polymath groups. <br>
> <br>
> To try to be more clear, I do *not* think we need to make people so intelligent that they understand the "tragedy of the commons", and avoid it due to their superior intellectual skills. I'm *just* saying that we make people who, when they find themselves in such situations, behave in ways that don't lead to tragedy. Can we do that with 100% accuracy? Probably not. Or, at the least, not without methods I would probably judge unethical. Could we arrange learning situations so that a much larger % of the population avoided tragedy in such situations? For sure. Children's TV, video games with different rule sets, curricular lessons that engage students in relevant situations without ever once mentioning the "tragedy" or trying to reason it out explicitly, etc. The goal is to produce "0-intelligence agents" that achieve better outcomes in commons-type situations, without having to think it through. <br>
> <br>
> Some, who specialize in education, or other related topics, will later learn why the curriculum is constructed in that way, but that's a different conversation, for a much later stage of life. <br>
> <br>
> This happens all the time, quite effectively, but usually to shitty ends. Laws, regulations, and ingrained habits that disallowed the showing of interracial couples on TV created generations that find such things "unnatural". Ditto any non-vanilla variety of sexual behavior and gender identification. Flooding the airwaves in the other direction is creating a generation that, when in 0-intelligence mode, navigates through the world quite differently. My kids are only a few years apart and I can see it between them. We were watching a cartoon and my older said something like "It's great to see a show with all this gender representation and without being judgy about sexuality", and my younger looked confused and honestly asked "Why is that odd?" <br>
> <br>
> I don't think it is overly optimistic to think we could make significant progress on the issues you and I both care about with investment in /that /type of infrastructure. <br>
> <br>
> <br>
> <br>
> <br>
> On Thu, Mar 25, 2021 at 5:48 AM ⛧ glen <<a href="mailto:gepropella@gmail.com" target="_blank">gepropella@gmail.com</a>> wrote:<br>
> <br>
> We do agree in our values. But we disagree in our optimism. The ecology you propose burns more energy than mine. Your setup attributes more commitment/energy from the agents ... energy they don't have or are unwilling to spend on that organizing setup. I tend to regard the agents as less intelligent than you consider them, likely because I'm less intelligent than you are ... we all generalize from our selves. I grok 0-intelligence agents because I am a 0-int agent! You, having rolled up a good character at the start of the campaign, are deluded into thinking everyone else also rolled well. 8^D<br>
> <br>
> In Utopia, all the agents spend reasonable amounts of energy, along diverse channels, to drive the ecology. But in this world, government is a necessary efficiency. Throughout history, when we *rely* on the individuals to do all this diverse work, they don't, even if, in an ideal world, they could.<br>
> <br>
> So we build infrastructure, eg government, to make the individuals more effective, to channel whatever energy/intelligence they have.<br>
> <br>
> Where our worlds meet, though, is that SOME infrastructure is debilitating. And SOME infrastructure is liberating. We agree that liberating government is good. And debilitating government is bad.<br>
> <br>
> Our task, then, is to classify which infrastructure increases liberty. And to engineer it into place. But that's very hard when so many of us maintain, despite the evidence, that all infrastructure is always bad.<br>
> <br>
> <br>
> On March 24, 2021 8:24:38 PM PDT, Eric Charles <<a href="mailto:eric.phillip.charles@gmail.com" target="_blank">eric.phillip.charles@gmail.com</a>> wrote:<br>
> >I think we probably pretty much agree.<br>
> ><br>
> >"It's a convenient fiction, or perhaps an approximating simplification"<br>
> >---<br>
> >Yes! But we need some of those, and "the individual" is one that<br>
> >appeals to<br>
> >me.<br>
> ><br>
> ><br>
> -- <br>
> glen ⛧<br>
<br>
<br>
-- <br>
↙↙↙ uǝlƃ<br>
<br>
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .<br>
FRIAM Applied Complexity Group listserv<br>
Zoom Fridays 9:30a-12p Mtn GMT-6 <a href="http://bit.ly/virtualfriam" rel="noreferrer" target="_blank">bit.ly/virtualfriam</a><br>
un/subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com" rel="noreferrer" target="_blank">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>
FRIAM-COMIC <a href="http://friam-comic.blogspot.com/" rel="noreferrer" target="_blank">http://friam-comic.blogspot.com/</a><br>
archives: <a href="http://friam.471366.n2.nabble.com/" rel="noreferrer" target="_blank">http://friam.471366.n2.nabble.com/</a><br>
</blockquote></div>
</blockquote></div>