[FRIAM] The Possibility of Self Knowledge

Prof David West profwest at fastmail.fm
Mon Nov 8 14:35:18 EST 2021


your second paragraph is a nice channeling of Rupert Sheldrake — minus the morphogenesis.

davew

On Mon, Nov 8, 2021, at 12:17 PM, uǝlƃ ☤>$ wrote:
> Both your and SteveS' comments address scope/extent directly. Nick 
> refuses to do that. I don't know why.
>
> If we allow for a spectrum of scope, we can say that the fast/small 
> loops "within self" provide those images, sounds, emotions you 
> experience in the deprivation tank. Even if they were programmed in, in 
> part, by experiences outside the tank, they keep "ringing" (standing 
> wave, persistent cycles) while in the tank. Like a tuned stringed 
> instrument, properties of your body/brain facilitate some tones over 
> others. If your body is grown over generations to "hold" some tones, 
> then that could be the source of the Jungian archetypes that continue 
> ringing under deprivation. And, arguably, those tones will ring longer 
> and louder than more transient ones learned before going into the tank, 
> that your body/brain aren't as effective/efficient at maintaining.
>
> Just outside the fast/small loops might be medium loops like dream 
> journaling, meditation, or exercise. That may extend to family or 
> regular contact with some things in the world. SteveS's extended 
> mind might extend out to slow/large loops like *knowing* that you have 
> your smartphone and can use Google at any given time, or *expecting* 
> that a city you're arriving in for the first time will have things like 
> overpasses and coffee shops, not only because your prior experiences 
> have programmed that in, but because you know other humans, with 
> similar bodies/brains (and archetypes) built those cities.
>
> Traveling to a completely foreign city like Pyongyang will expose 
> "other" not-self in the same way trying to learn a new game or sport 
> will expose "other" not-self.
>
> A discussion of self is meaningless without a discussion of scope.
>
> On 11/8/21 10:41 AM, Prof David West wrote:
>> /"What inputs do we use to infer facts about our selves?"/
>> 
>> The key word is _inputs_, which implies an external origin: input from somewhere other than the "self."
>> 
>> Consider the experience of LSD while in a sensory deprivation tank — a real one, not the relaxation type offered at spas.
>> 
>> There are no 'inputs' via any sensory channel — at least none above the conscious awareness threshold.
>> 
>> Yet the mind is filled with images, sounds, emotions ... Whence do they come? A harder challenge: what is the origin of the "sacred symbols" and Jungian archetypes that are perceived in this situation? Species memory, or the "collective unconscious," perhaps?
>> 
>> davew
>> 
>> 
>> On Mon, Nov 8, 2021, at 11:23 AM, thompnickson2 at gmail.com <mailto:thompnickson2 at gmail.com> wrote:
>>>
>>> Hi, Steve,
>>>
>>>  
>>>
>>> So, the question which dominates self-perception theory is, What inputs do we use to infer facts about our selves?  Often they are precisely NOT those inputs that a privileged access theory would predict.
>>>
>>>  
>>>
>>> Nick Thompson
>>>
>>> ThompNickSon2 at gmail.com <mailto:ThompNickSon2 at gmail.com>
>>>
>>> https://wordpress.clarku.edu/nthompson/ <https://wordpress.clarku.edu/nthompson/>
>>>
>>>  
>>>
>>> *From:* Friam <friam-bounces at redfish.com> *On Behalf Of *Steve Smith
>>> *Sent:* Monday, November 8, 2021 11:17 AM
>>> *To:* friam at redfish.com
>>> *Subject:* Re: [FRIAM] The Possibility of Self Knowledge
>>>
>>>  
>>>
>>> Nick -
>>>
>>> I was contemplating this very question this morning in an entirely different context, though I am sure my lurking on these threads has informed my thinking as well.
>>>
>>> It seems to me that the question of "self" is central, as suggested/asserted by Glen (constantly?), and my current apprehension of a better notion of self involves the superposition of what most of us would consider an extended self. Extended in many dimensions, and with no bound other than the practical one of how expansive our apprehension can be. We are not just the "self" that we have become over a lifetime of experiences, but the "self" that exists as a standing wave in our geospatial embedding (a flux of molecules flowing through us, becoming us, being shed from us, etc.), and in our entire filial relations with other organisms (our pets, domesticates, food sources, scavengers of our food, etc., as well as our microbiome, up to perhaps macroscopic parasites such as worms and lice we may harbor). We are also the sum of our social relations and affiliations with other humans and their constructs (the Democratic party, Proud Boys, Professional Poker League of America, etc.). We
>>> are in relation to objects and creatures and other sentients of which we are not at all (or only barely) aware... we are products of growing up in, or currently living in, a landscape, a cityscape, etc.
>>>
>>> I know this is somewhat oblique/tangential/orthogonal to the point you are making, but I nevertheless felt compelled to make it here as it schmears the question of self-knowledge in a (I believe)  significant way.
>>>
>>> - Steve
>>>
>>> On 11/7/21 10:33 PM, thompnickson2 at gmail.com <mailto:thompnickson2 at gmail.com> wrote:
>>>
>>>     Eric inter alia,
>>>
>>>      
>>>
>>>     The position I have taken concerning self knowledge is that all knowledge is of the form of inferences made from evidence. To the extent that some sources of knowledge may lead to better inferences -- may better prepare the organism for what follows -- some may be more privileged than others, but that privilege needs to be demonstrated. Being in the same body as the knowing system does not grant the knowing system any */a priori/* privilege. If you have followed me so far, then a self-knowing system is using sensors to infer (fallibly) the state of itself. So if Glen and Marcus concede that this is the only knowledge we ever get about anything, then I will eagerly concede that this is “self-knowledge”. It’s only if you claim that self-knowing is of a different character than other-knowing that we need to bicker further. I stipulate that my point is trivial, but not that it’s false.
>>>
>>>      
>>>
>>>     I have cc’d bits of the thread in below in case you all have forgotten. I could not find any contribution from Eric on this subject within the thread, although he did have something to say about poker, hence I am rethreading.
>>>
>>>      
>>>
>>>     Nick.
>>>
>>>      
>>>
>>>      
>>>
>>>      
>>>
>>>     Nick Thompson
>>>
>>>     ThompNickSon2 at gmail.com <mailto:ThompNickSon2 at gmail.com>
>>>
>>>     https://wordpress.clarku.edu/nthompson/ <https://wordpress.clarku.edu/nthompson/>
>>>
>>>
>>>     uǝlƃ ☤>$ via redfish.com, Nov 1, 2021, 4:20 PM, to friam
>>>
>>>     Literal self-awareness is possible. The flaw in your argument is that "self" is ambiguous in the way you're using it. It's not ambiguous in the way Marcus or I intend it. You can see this nicely if you elide "know" from your argument. We know nothing. The machine knows nothing. Just don't use the word "know" or the concept it references. There need not be a model involved, either, only sensors and things to be sensed.
>>>
>>>     Self-sensing means there is a feedback loop between the sensor and the thing it senses. So, the sensor measures the sensed and the sensed measures the sensor. That is self-awareness. There's no need for any of the psychological hooha you often object to. There's no need for privileged information *except* that there has to be a loop. If anything is privileged, it's the causal loop.
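That sensor/sensed loop can be caricatured in a few lines of Python (a hypothetical sketch, not anything specified in the thread; the update rule and the gain are invented purely for illustration):

```python
# Two coupled state variables: on each update, each side "measures" the
# other and moves toward its reading, closing the sensor <-> sensed loop.
def step(sensor: float, sensed: float, gain: float = 0.5) -> tuple[float, float]:
    new_sensor = sensor + gain * (sensed - sensor)      # sensor measures the sensed
    new_sensed = sensed + gain * (new_sensor - sensed)  # the sensed measures the sensor back
    return new_sensor, new_sensed

sensor, sensed = 0.0, 1.0
for _ in range(50):
    sensor, sensed = step(sensor, sensed)
# The two states converge to a shared fixed point: neither side "knows"
# anything, yet each tracks the other through nothing but the loop itself.
```

Nothing here requires a model of the other side, only repeated mutual measurement, which is the point being made about privileged loops.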
>>>
>>>     The real trick is composing multiple self-self loops into something resembling what we call a conscious agent. We can get to the uncanny valley with regular old self-sensing control theory and robotics. Getting beyond the valley is difficult: https://youtu.be/D8_VmWWRJgE <https://youtu.be/D8_VmWWRJgE> A similar demonstration is here: https://youtu.be/7ncDPoa_n-8 <https://youtu.be/7ncDPoa_n-8>
>>>
>>>
>>>     Preview: YouTube video "Realistic and Interactive Robot Gaze" <https://www.youtube.com/watch?v=D8_VmWWRJgE>
>>>
>>>     Preview: YouTube video "Mark Tilden explaining Walkman (VBug1.5) at the 1995 BEAM Robot Games" <https://www.youtube.com/watch?v=7ncDPoa_n-8>
>>>
>>>
>>>     Marcus Daniels via redfish.com, Nov 2, 2021, 8:37 AM
>>>
>>>
>>>     My point was that the cost to probe some memory address is low.   And all there is, is I/O and memory. 
>>>
>>>      It does become difficult to track thousands of addresses at once: think of a debugger that has millions of watchpoints. However, one could have diagnostics compiled into the code to check invariants from time to time. I don't know why Nick says there is no privilege. There can be complete privilege. Extracting meaning from that access is rarely easy, of course, just as debugging any given problem can be hard.
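Marcus's "diagnostics compiled in ... to check invariants" might look like this in miniature (a hedged sketch; the Counter class and its invariant are invented for illustration, not anything from the thread):

```python
# A data structure that probes its own state on every mutation: the
# self-check is compiled into the code rather than applied from outside.
class Counter:
    def __init__(self) -> None:
        self.total = 0
        self.increments = 0

    def bump(self, by: int) -> None:
        self.total += by
        self.increments += 1
        self._check_invariants()  # a cheap "peek" at our own memory

    def _check_invariants(self) -> None:
        # Invariant: every bump is recorded, so the count never goes negative.
        assert self.increments >= 0
        # The probe itself (complete privilege over our own state) is cheap;
        # extracting *meaning* from what it reads is the hard part.

c = Counter()
for _ in range(3):
    c.bump(2)
# c.total == 6, c.increments == 3
```

This illustrates the asymmetry in the paragraph above: the access is total and nearly free, but the invariant only catches problems someone thought to encode.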
>>>
>>>
>>>
>>>     uǝlƃ ☤>$ via redfish.com, Nov 2, 2021, 9:06 AM, to friam
>>>
>>>
>>>     Well, I could be wrong. But both Nick and EricC seem to argue there's no privilege "in the limit" ... i.e. with infeasibly extensible resources, perfect observability, etc. It's just a reactionary position against those who believe in souls or a Cartesian cut. Ignore it. >8^D
>>>
>>>     But I don't think there can be *complete* privilege. Every time we think we come up with a way to keep the black hats out, they either find a way in ... or find a way to infer what's happening like with power or audio profiles.
>>>
>>>     I don't think anyone's arguing that peeks are expensive. The argument centers on the impact of that peek, how it's used. Your idea of compiling in diagnostics would submit to Nick's allegation of a *model*. I would argue we need even lower-level self-organization. I vacillate between thinking digital computers could [not] be conscious because of this argument; the feedback loops may have to be very close to the metal, like FPGA-close. Maybe consciousness has to be analog in order to realize meta-programming at all scales?
>>>
>
> -- 
> "Better to be slapped with the truth than kissed with a lie."
> ☤>$ uǝlƃ
>
> .-- .- -. - / .- -.-. - .. --- -. ..--.. / -.-. --- -. .--- ..- --. .- - .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn UTC-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives:
>  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
>  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
