[FRIAM] aversive learning

David Eric Smith desmith at santafe.edu
Wed Sep 1 18:08:37 EDT 2021


Yes!  Really pleasing project idea.  

My suspicion is the same as yours.  Improved coverage; some small increase in variability.  So the question becomes, what is the Pareto surface?

There is a funny thing barely like this in language acquisition.  Every language has a limited inventory of phonemes, though that number can vary wildly, from perhaps <20 in deep-Polynesian languages that have had small, geographically-dissipating populations for a very long time, to more than 100 in groups that seem to be old and in one place for a long time, particularly Khoisan and North Caucasian (Georgian?).  But the full set of phonological segments in the world’s languages must run to the thousands, or form nearly a continuum, if one included all the dialects.  Keeping empty space between them within any one language is important for the transition in the 9-12-month age range, when babies start to form phonemes from the phonological continua in their environments.  Probably important for perception as well.

So the interesting thing comes up for kids who inhabit very polyglot environments.  It seems that they can learn up to maybe 4-7 languages as fully-native, without noticeable loss of performance in any of them.  (Although here, I have questions about whether they sacrifice robustness in high-noise environments, because they have more segmentation patterns available to draw on.)  Above some number, though, I think the ability to learn gets impaired, because the set of valid phonemes is too dense as languages get switched, with not enough space between them.  
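The crowding intuition can be sketched numerically.  This is a toy model, not phonetics: the one-dimensional perceptual continuum, the noise level, and the even spacing of category centers are all assumptions for illustration.

```python
import random

def confusion_rate(n_categories, noise_sd=0.03, trials=20000, seed=0):
    """Toy model of category crowding: n category centers evenly
    spaced on [0, 1]; a 'speaker' emits a center plus Gaussian noise,
    and a 'listener' assigns the nearest center.  Returns the
    misclassification (confusion) rate."""
    rng = random.Random(seed)
    centers = [i / (n_categories - 1) for i in range(n_categories)]
    errors = 0
    for _ in range(trials):
        target = rng.randrange(n_categories)
        heard = centers[target] + rng.gauss(0, noise_sd)
        perceived = min(range(n_categories),
                        key=lambda i: abs(centers[i] - heard))
        if perceived != target:
            errors += 1
    return errors / trials

# As the inventory grows within a fixed perceptual space, the spacing
# shrinks, and confusion rises steeply once spacing ~ noise width.
for n in (10, 20, 50, 100):
    print(n, round(confusion_rate(n), 3))
```

The same trade-off would apply to a polyglot listener: pooling the phoneme sets of many languages raises n_categories without widening the perceptual space.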

I wonder if there is anything like this in the reinforcement dynamics of immune cell populations.  Probably the things we do with current vaccines are too low-dimensional to approach such questions.

Eric

> On Sep 2, 2021, at 6:44 AM, Marcus Daniels <marcus at snoutfarm.com> wrote:
> 
> Perhaps it could be done with AlphaFold?
> I wouldn't be surprised if mixing vaccines would be more effective.   A little more coverage, but mostly the same information.
> 
> -----Original Message-----
> From: Friam <friam-bounces at redfish.com> On Behalf Of David Eric Smith
> Sent: Wednesday, September 1, 2021 2:37 PM
> To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
> Subject: Re: [FRIAM] aversive learning
> 
> There’s a statistic I would like to see, but it would be costly to collect and would run into ethics trouble (withholding of known help), so it will probably not be available.
> 
> For 2-shot vaccines, it is considered important that the second shot be the same formula as the first.  Reason being that we build up some set of antibody-producing, recognition, or killer cells as a kind of library, and the second shot then amplifies their number (perhaps also their distribution in the body).  There can be (it is believed) enough difference between the proteins presented by one vaccine and by another that you can’t be sure whatever presented-protein shape your cells happened to recognize on one would be reliably available on the other.  
> 
> But how much does it matter?  If two 2-shot mRNA vaccines are both effective against live wild-type virus and several of its derived mutants, how different can they then be from it or from each other?  If you gave mixed first and second shots, would there be something like “hybrid vigor” in the protection statistics, or its opposite (whatever that would be called): a clear signal of reduced protectiveness?
> 
> If one could get such data for vaccines, what would be the out-of-sample predictivity against new mutants?  Which ones?
> 
> And that’s just a protein or two.
> 
> Eric
> 
> 
>> On Sep 2, 2021, at 4:39 AM, uǝlƃ ☤>$ <gepropella at gmail.com> wrote:
>> 
>> Well, induction relies on big data. So, you might be right that mimicking prose from a particular target is easier than mimicking poetry from a particular target. But that relies on low distribution drift. For example, if a poet always uses the same rules for all her poetry, then her product is more readily mimic-able than if she writes in a large variety of different styles. Then you might be able to crank out work in any one of her styles, but not be able to mimic *that* poet.
>> 
>> Similarly, for prose authors who write different things, e.g. crossing genre (fantasy to true crime or YA or whatever) or crossing story type (unreliable narrator, nonlinearity, etc.) versus prose authors who always create the same type of thing.
>> 
>> We have this same problem with synthetic data in machine learning. While it may *seem* like the synthetic data you generated from distributions learned from real data is a good mimic, there may be occult distributions in that real data that you didn't manage to induce (at the time, with the methods you used, etc.). So, if some other method of analysis were applied to both the real data and the synthetic data drawn from it, you might find drastic differences between the synthetic data and its referent real data.
>> 
>> So, while I accept your story while using vague words like "style of any poet" and such, my guess is the claim would fall flat under scrutiny. I'm often wrong. So, if you'll provide the materials, others can go through the same steps you went through to examine Gabriel's products. For example, whatever it is that you, Dave, the reader, think is "Hunter Thompson style" may not be much like what I think is "Hunter Thompson style" ... and that might be true even if the objective parameters (induced from his prose or driving the algorithm) are the same.
>> 
>> 
>> On 9/1/21 12:19 PM, Prof David West wrote:
>>> au contraire,
>>> 
>>> prose is simpler than poetry, mostly because there are more rules and constraints. Also, statistical analysis of prose to correctly identify the author has been a thing for a long time. Richard has a really cool example of a prose story that emulates Hunter Thompson that, I would bet, no one on this list could have detected as a deep fake had you not been forewarned.
>>> 
>>> davew
>>> 
>>> 
>>> On Wed, Sep 1, 2021, at 11:50 AM, uǝlƃ ☤>$ wrote:
>>>> Yeah, both social media posts *and* poetry are a low bar. Machine 
>>>> generated prose is more difficult, I expect. There are good examples 
>>>> from GPT3. But I don't know of any other algorithm that does a 
>>>> decent job. So I doubt the same techniques Gabriel uses to generate 
>>>> poetry and social media posts would work to generate *some* of our 
>>>> postings, particularly the long-winded amongst us.
>>>> 
>>>> My own play with MegaHAL generated obvious garbage.
>>>> 
>>>> On 9/1/21 10:25 AM, Prof David West wrote:
>>>>> Richard Gabriel has created software that can generate poetry in the style of any poet. It also generates poetry that passes the Turing test in that experts are unable to distinguish between machine generated poetry and human generated poetry. He demoed this at an annual meeting of poets at Warren Wilson College (where Richard got his MFA).
>>>>> 
>>>>> I am certain he could use his program to create FRIAM posts that could emulate any of us.
>>>>> 
>>>>> He also, for IBM on a DoD contract, created an NL program that monitored social media posts, detected those deemed inimical to government interests (e.g., setting up a flash mob to protest the visit of a political personage), and generated counter-postings (e.g., moving the mob to a pig farm instead of the county courthouse because "inside sources" confirmed the personage changed her itinerary).
>>>>> 
>>>>> Of course social media postings create a pretty low bar for an AI to be convincing.
>>>>> 
>>>>> davew
>>>>> 
>>>>> 
>>>>> On Wed, Sep 1, 2021, at 10:33 AM, Marcus Daniels wrote:
>>>>>> 
>>>>>> If we collected years of FRIAM archives and trained a recycle GAN on them, I think it would probably be possible to generate plausible sentences from each other.  To the extent we pay attention to what we say at all, it might not be that hard to fake, really.   I think we could get the basic intent of all the regulars, if not the details of their writing (which the GAN would get).   I’ve often wished for a ML avatar that could stand in for me on Zoom meetings, so I could go play with my dog or go running or whatever.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> *From:* Friam <friam-bounces at redfish.com> *On Behalf Of 
>>>>>> *thompnickson2 at gmail.com
>>>>>> *Sent:* Wednesday, September 1, 2021 9:21 AM
>>>>>> *To:* 'The Friday Morning Applied Complexity Coffee Group' 
>>>>>> <friam at redfish.com>
>>>>>> *Subject:* Re: [FRIAM] aversive learning
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Would I pass the Turing test if I could, by my emails, convince you that I was Dave?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Or is that just the Dave test?  Would I pass the Turing test if I could convince you that I was Turing?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Who knows what evil lurks in the hearts of men!
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> n
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Nick Thompson
>>>>>> 
>>>>>> ThompNickSon2 at gmail.com <mailto:ThompNickSon2 at gmail.com>
>>>>>> 
>>>>>> https://wordpress.clarku.edu/nthompson/
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> *From:* Friam <friam-bounces at redfish.com 
>>>>>> <mailto:friam-bounces at redfish.com>> *On Behalf Of *Marcus Daniels
>>>>>> *Sent:* Wednesday, September 1, 2021 11:26 AM
>>>>>> *To:* The Friday Morning Applied Complexity Coffee Group 
>>>>>> <friam at redfish.com <mailto:friam at redfish.com>>
>>>>>> *Subject:* Re: [FRIAM] aversive learning
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> I’m already convinced Dave is a bot.  I know I am.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> *From:* Friam <friam-bounces at redfish.com 
>>>>>> <mailto:friam-bounces at redfish.com>> *On Behalf Of *Marcus Daniels
>>>>>> *Sent:* Wednesday, September 1, 2021 8:23 AM
>>>>>> *To:* The Friday Morning Applied Complexity Coffee Group 
>>>>>> <friam at redfish.com <mailto:friam at redfish.com>>
>>>>>> *Subject:* Re: [FRIAM] aversive learning
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Culture is online now, didn’t you hear?
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> *From:* Friam <friam-bounces at redfish.com 
>>>>>> <mailto:friam-bounces at redfish.com>> *On Behalf Of *Prof David West
>>>>>> *Sent:* Wednesday, September 1, 2021 8:12 AM
>>>>>> *To:* friam at redfish.com <mailto:friam at redfish.com>
>>>>>> *Subject:* Re: [FRIAM] aversive learning
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Glen quoted BC Smith:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> "What does all this mean in the case of AIs and computer systems generally? Perhaps at least this: that it is hard to see how synthetic systems could be trained in the ways of judgment except by gradually, incrementally, and systematically enmeshed in normative practices that engage with the world and that involve thick engagement with teachers ('elders'), who can steadily develop and inculcate not just 'moral sensibility' but also intellectual appreciation of intentional commitment to the world."
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> I read from (or into) this statement a position I have held vis-à-vis AI since I did my masters thesis in CS (AI) — computers cannot be intelligent in any general sense until and unless they participate in human culture. We automatically and non-consciously "enculturate" (normative practices that engage the world and involve thick engagement) our children.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> This is NOT education. Education is nothing more than a pale shadow of enculturation. Not more than 10% of the 'knowledge' in your head (knowledge about what to do and why and when and variations according to circumstance and context ....) was learned via any kind of formal education or training, and yet the other 90% is absolutely essential and is the foundation for comprehending and utilizing the 10% you did learn formally.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> Until we can enculturate our computers, we will never achieve general AI (or even any complete specialized AI).
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> davew
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Wed, Sep 1, 2021, at 8:28 AM, uǝlƃ ☤>$ wrote:
>>>>>>> UK judge orders rightwing extremist to read classic literature or face prison
>>>>>>> https://www.theguardian.com/politics/2021/sep/01/judge-orders-rightwing-extremist-to-read-classic-literature-or-face-prison
>>>>>>>
>>>>>>> I know several liberals who agree with the righties that vaccine and mask mandates are bad, though not for the same reasons. Righties yap about fascism and limits to their "freedom". But the liberals talk about how mandates just push the righties further into their foxholes, preventing collegial conversation.
>>>>>>>
>>>>>>> So the story above is an interesting situation in similar style. Renee', to this day, hates Shakespeare because she was forced to memorize Romeo and Juliet as a kid. Of course, she doesn't hate Shakespeare, because she hasn't read much Shakespeare. She just *thinks* she hates it because of this "mandate" she suffered under. This court-mandated "literature therapy" being imposed on this kid could work, if he can read it sympathetically. But if he can't, if he simply reads it "syntactically", what will he learn?
>>>>>>>
>>>>>>> BC Smith, in his book "The Promise of AI", channels Steels & Brooks [ψ] in writing:
>>>>>>>
>>>>>>> "What does all this mean in the case of AIs and computer systems generally? Perhaps at least this: that it is hard to see how synthetic systems could be trained in the ways of judgment except by gradually, incrementally, and systematically enmeshed in normative practices that engage with the world and that involve thick engagement with teachers ('elders'), who can steadily develop and inculcate not just 'moral sensibility' but also intellectual appreciation of intentional commitment to the world."
>>>>>>>
>>>>>>> If we think of this kid, Ben John, as an AI, what will he learn by mandating he read Dickens? Similarly, what are the mandate protesters learning from our mandates? Stupidity should be painful. And the court's reaction to this kid's stupidity, the pain of reading Pride and Prejudice, should teach that kid something. But which is the more dangerous stupidity? Which stupidity runs the risk of a more catastrophic outcome? Avoiding the vaccine? Or mandating vaccination?
>>>>>>>
>>>>>>> [ψ] https://doi.org/10.4324/9781351001885
>> 
>> 
>> --
>> ☤>$ uǝlƃ
>> 
>> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
>> FRIAM Applied Complexity Group listserv
>> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
>> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>> FRIAM-COMIC http://friam-comic.blogspot.com/
>> archives: http://friam.471366.n2.nabble.com/
> 
> 



