[FRIAM] Selective cultural processes generate adaptive heuristics

Marcus Daniels marcus at snoutfarm.com
Wed Apr 13 15:09:04 EDT 2022


To bend the thread a little, suppose we have 10,000 Tesla-like cars in full self-driving mode in some part of a city. They coordinate via a shared optimization model that maximizes throughput between different destinations by choosing routes that don't interfere with one another. Isn't that a sort of culture?
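For concreteness, here is a minimal sketch of the sort of coordination I mean, in Python. The route sets and the greedy, interference-penalizing rule are made up for illustration; a real fleet would be solving something closer to a global flow optimization.

from collections import Counter

# Candidate routes per car, as tuples of road-segment ids (hypothetical data).
candidate_routes = {
    "car_A": [("s1", "s2", "s5"), ("s1", "s3", "s5")],
    "car_B": [("s2", "s4"), ("s3", "s4")],
    "car_C": [("s1", "s2", "s4"), ("s6", "s4")],
}

def assign_routes(candidates):
    """Greedy coordination: each car takes whichever of its candidate routes
    is least loaded so far, approximating a throughput-maximizing,
    interference-minimizing assignment."""
    load = Counter()       # how many cars already use each road segment
    assignment = {}
    for car, routes in candidates.items():
        best = min(routes, key=lambda r: sum(load[seg] for seg in r))
        assignment[car] = best
        load.update(best)
    return assignment

print(assign_routes(candidate_routes))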

As for attention, I don’t have enough of it to follow the innovations:  https://arxiv.org/pdf/2103.16775.pdf
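As a rough, concrete reading of the "modal" part of glen's question below, assuming PyTorch (the toy module and its names are mine, purely for illustration): the same network can be flipped between learning and performing, between soft and hard attention, and can have parts of its "self" frozen, all by flags rather than retraining.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyAttender(nn.Module):
    def __init__(self, dim=16, n_slots=4):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        # the "self" being attended to: a small bank of internal slots
        self.slots = nn.Parameter(torch.randn(n_slots, dim))

    def forward(self, x, hard=False):
        scores = self.query(x) @ self.slots.T        # relevance of each slot
        if hard:
            # hard attention: commit to a single slot (non-differentiable argmax)
            weights = F.one_hot(scores.argmax(-1), self.slots.shape[0]).float()
        else:
            # soft attention: a differentiable blend over all slots
            weights = scores.softmax(-1)
        return weights @ self.slots

model = ToyAttender()
model.train()                        # "learning mode"
out = model(torch.randn(2, 16), hard=False)

model.eval()                         # "perform mode"
with torch.no_grad():
    out = model(torch.randn(2, 16), hard=True)

# Changing *which* part of self gets updated: freeze the slot bank.
model.slots.requires_grad_(False)

None of that gets at the loopy attending-to-the-attender part, but the knobs for mode and for what-gets-attended are already there.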


From: Friam <friam-bounces at redfish.com> On Behalf Of Prof David West
Sent: Wednesday, April 13, 2022 11:56 AM
To: friam at redfish.com
Subject: Re: [FRIAM] Selective cultural processes generate adaptive heuristics

Glen asked: "Do we have self-attending machines that can change what parts of self they're attending? Change from soft to hard? Allow for self-attending the part that's self-attending (and up and around in a loopy way)? To what extent can we make them modal, swapping from learning mode to perform mode?"

I'll not attempt a direct response yet, but I am certain I will have one in a few days. I am in the middle of digesting:
The Master and His Emissary
The Matter With Things, vol I: The Ways to Truth
The Matter With Things, vol II: What Then Is True

All by Iain McGilchrist

(total pages a little over 1500)

Attending is a key concept.

All center on the bicameral mind: how the two sides work cooperatively and constantly, yet offer different perspectives and different means of attending.

A key thesis is that the "rational" left brain has assumed dominance and distorts our view of the world and of ourselves.

I have long contended (since my Ph.D. thesis in 1988) that AIs will never equal human intelligence because they cannot and do not participate in "culture." From McGilchrist, I will be amending/extending that argument to include "because they lack a right brain."

davew


On Wed, Apr 13, 2022, at 8:36 AM, glen wrote:
> But we don't "create the neural structure over and over", at least we
> don't create the *same* neural structure over and over. One way in
> which big-data-trained self-attending ANN structures now mimic meat
> intelligence is in that very intense training period. Development (from
> zygote to (dysfunctional) adult) is the training. Adulting is the
> testing/execution. But these transformer-based mechanisms don't seem,
> in my ignorance, to be as flexible as those grown in meat. Do we have
> self-attending machines that can change what parts of self they're
> attending? Change from soft to hard? Allow for self-attending the part
> that's self-attending (and up and around in a loopy way)? To what
> extent can we make them modal, swapping from learning mode to perform
> mode? As SteveS points out, can machine intelligence "play" or
> "practice" in the sense normal animals like us do? Are our modes even
> modes? Or is all performance a type of play? To what extent can we make
> them "social", collecting/integrating multiple transformer-based ANNs
> so as to form a materially open problem solving collective?
>
> Anyway, it seems to me the neural structure is *not* an encoding of a
> means to do things. It's a *complement* to the state(s) of the world in
> which the neural structure grew. Co-evolutionary processes seem
> different from encoding. Adversaries don't encode models of their
> opponents so much as they mold their selves to smear into, fit with,
> innervate, anastomose [⛧], their adversaries. This is what makes 2
> party games similar to team games and distinguishes "play" (infinite or
> meta-games) from "gaming" (finite, or well-bounded payoff games).
>
> Again, I'm not suggesting machine intelligence can't do any of this; or
> even that they aren't doing it to some small extent now. I'm only
> suggesting they'll have to do *more* of it in order to be as capable as
> meat intelligence.
>
> [⛧] I like "anastomotic" for adversarial systems as opposed to
> "innervated" for co-evolution because anastomotic tissue seems (to me)
> to result from a kind of high pressure, biomechanical stress. Perhaps
> an analogy of soft martial arts styles to innervate and hard styles to
> anastomose?
>
> On 4/12/22 20:43, Marcus Daniels wrote:
>> Today, humans go to some length to record history, to preserve companies and their assets.  But for some reason preserving the means to do things -- the essence of a mind -- has a different status.  Why not seek to inherit minds too?  Sure, I can see the same knowledge base can be represented in different ways.  But studying those neural representations could also be informative.  What if neural structures have similar topological properties given some curriculum?  What a waste to create that neural structure over and over.
>>
>> -----Original Message-----
>> From: Friam <friam-bounces at redfish.com> On Behalf Of Steve Smith
>> Sent: Tuesday, April 12, 2022 7:22 PM
>> To: friam at redfish.com
>> Subject: Re: [FRIAM] Selective cultural processes generate adaptive heuristics
>>
>>
>> On 4/12/22 5:53 PM, Marcus Daniels wrote:
>>> I am not saying such a system would not need to be predatory or parasitic, just that it can be arranged to preserve the contents of a library.
>>
>> And I can't help knee-jerking that when a cell attempts to live forever (and/or replicate itself perfectly), it becomes a tumour in the
>> organ(ism) that gave rise to it, and even metastasizes, spreading its hubris to other organs/systems.
>>
>> Somehow, I think the inter-planetary post-human singularians are more like metastatic cells than "the future of humanity".  Maybe that is NOT a dead-end, but my mortality-chauvinistic "self" rebels.  Maybe if I live long enough I'll come around... or maybe there will be a CAS-mediated edit to fix that pessimism in me.
>>
>>
>>>> On Apr 12, 2022, at 4:29 PM, glen <gepropella at gmail.com> wrote:
>>>>
>>>> Dude. Every time I think we could stop, you say something I object to. >8^D You're doing it on purpose. I'm sure of it ... like pulling the wings off flies and cackling like a madman.
>>>>
>>>> No, the maintenance protocol must be *part of* the meat-like intelligence. That's why I mention things like suicide or starving yourself because your wife stops feeding you. To me, a forever-autopoietic system seems like a perpetual motion machine ... there's something being taken for granted by the conception ... some unlimited free energy or somesuch.
>>>>
>
> --
> Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙
>
> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn UTC-6  http://bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:
>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/