[FRIAM] lurking

uǝlƃ ☤>$ gepropella at gmail.com
Tue Nov 2 12:08:52 EDT 2021


Your "larding" is irritating. So, I'll respond in bullets, forming what kindasorta looks like a coherent response. But it's just a busyness trick, promulgated by busyness people. I encourage you to formulate your posts as coherent wholes and deny your your hedonic "larding" impulse.

• I have no antipathy for behaviorism. I *am* a behaviorist, at least in principle. You may think I have antipathy for it because I've done a good job steelmanning my opposition. Sorry about that. It happens all the time. It's me, not you.

• My usage of "self" relies on scoping. If we can't talk about scope, then we can't talk about self.

• sense vs know - No, the word "know" is hopelessly useless. Just stop using it.

• the hand that turns itself off - No, that's not a loop because there's only 1 iterate. This is one of the reasons finite state machines are incomplete models without a controller or a clock (the first of the sketches after these bullets). I've been trying to find a way to ask such a question about hypergraphs. But my ignorance prevents me.

• causal vs regulatory loop - "Causal" is more primitive than "regulatory". If you want to start with "regulatory", then you'll have to define "causal" in terms of "regulation". In my experience, regulation cuts a thing into 2 parts: the system and the regulator. So cause is a prerequisite to regulation (the second sketch after these bullets). But I'm happy to play some other game if you set up the rules.

• uncanniness - Yes, it does tell us something about the circuitry. Uncanniness is a phenomenon, generated by a generator. We can study gen-phen maps in both the forward and inverse directions. By comparing the behavior (phenomenal repertoire) of a single machine under different conditions, we can infer some properties of its gen-phen map, which constrains the properties of the generator (the third sketch after these bullets illustrates the inverse direction).
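
Here's a minimal sketch in Python of the finite-state-machine point. Everything in it is my own illustration (the toy transition table, the event names): the table alone does nothing; some external driver, a clock or controller, has to iterate it, and the hand-in-the-box toy runs that iteration exactly once.

# Minimal sketch: a finite state machine is just a transition table.
# Nothing happens until an external driver (a "clock" or controller)
# iterates it. The hand-in-the-box toy iterates exactly once: ON -> OFF.

TRANSITIONS = {
    ("ON", "hand_flips_switch"): "OFF",   # the hand turns its own switch off
    ("OFF", "user_flips_switch"): "ON",   # a user could turn it back on
}

def step(state, event):
    """One transition. The table never calls this on its own."""
    return TRANSITIONS.get((state, event), state)

def run(state, events):
    """The external driver: without this loop, the FSM is inert."""
    for e in events:
        state = step(state, e)
    return state

print(run("ON", ["hand_flips_switch"]))                            # OFF: 1 iterate, no loop
print(run("ON", ["hand_flips_switch", "user_flips_switch"] * 3))   # ON: a driver makes it a loop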

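Same caveat for the causal-vs-regulatory point; the thermostat below is a made-up toy, not anyone's model. The point is only that the cut produces two parts, a system and a regulator, and every regulatory step is built out of ordinary causal updates.

# Toy illustration: regulation cuts the world into two parts, a system
# (plant) and a regulator, and each regulatory step is composed of
# plain causal updates. Cause first, regulation second.

def plant(temp, heater_on):
    """The system: a causal rule mapping current state to next state."""
    return temp + (1.0 if heater_on else -0.5)

def regulator(temp, setpoint=20.0):
    """The regulator: another causal rule, reading the system's state."""
    return temp < setpoint

temp = 15.0
for _ in range(10):
    heater_on = regulator(temp)    # the regulator senses the system
    temp = plant(temp, heater_on)  # the system responds causally
print(round(temp, 1))              # hovers near the 20.0 setpoint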

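And one more sketch, for the gen-phen point. The candidate generators and probe conditions below are made up; what it shows is that comparing one machine's phenomenal repertoire across conditions rules candidate generators in or out, which is the inverse direction of the gen-phen map.

# Toy gen-phen inference: we can't see the generator, only the phenomena
# it produces under different conditions. Comparing the phenomenal
# repertoire across conditions constrains which generators could have
# produced it.

candidates = {
    "threshold": lambda x: 1.0 if x > 0.5 else 0.0,  # hypothetical generator A
    "linear":    lambda x: x,                        # hypothetical generator B
    "quadratic": lambda x: x * x,                    # hypothetical generator C
}

def observe(machine, conditions):
    """Forward map: generator + conditions -> phenomena."""
    return [machine(c) for c in conditions]

conditions = [0.2, 0.4, 0.6, 0.8]        # the different conditions we probe
hidden = candidates["quadratic"]         # the machine under study (unknown to us)
phenomena = observe(hidden, conditions)  # all we actually get to see

# Inverse direction: keep only the generators consistent with the behavior.
consistent = [name for name, g in candidates.items()
              if observe(g, conditions) == phenomena]
print(consistent)  # ['quadratic']
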
On 11/2/21 8:39 AM, thompnickson2 at gmail.com wrote:
> You may accuse me of trolling in what follows, or being manipulatively stupid, but honestly I do not understand what you are saying and would LIKE to understand it.   Please see larding, below.  Of course I may not understand my own motives. 
> 
>  
> 
> Nick Thompson
> 
> ThompNickSon2 at gmail.com
> 
> https://wordpress.clarku.edu/nthompson/
> 
>  
> 
> -----Original Message-----
> From: Friam <friam-bounces at redfish.com> On Behalf Of uǝlƃ ☤>$
> Sent: Monday, November 1, 2021 4:20 PM
> To: friam at redfish.com
> Subject: Re: [FRIAM] lurking
> 
>  
> 
> Literal self-awareness is possible. The flaw in your argument is that "self" is ambiguous in the way you're using it. It's not ambiguous
> 
> */[NST===>I guess I need to understand your usage.  <===nst] /*
> 
> in the way Marcus or I intend it.
> 
> You can see this nicely if you elide
> 
> */[NST===>i.e., “remove”?<===nst] /*
> 
>  "know" from your argument.  We know nothing. */[NST===> I can agree, for some values of the word, that we know nothing, but isn’t that  the same world in which we sense nothing?  <===nst] /*
> 
> The machine knows nothing. Just don't use the word "know" or the concept it references.  There need not be a model involved, either, only sensors and things to be sensed.
> 
> */[NST===>So, your antipathy for behaviorism notwithstanding, this feels like a hyper-behaviorist position you are adopting.  Way beyond mine.  Let’s talk about that toy which involves a hand that comes out and turns off the switch that governs it.  This is a loop, right?  Is there sensing going on here?  <===nst] /*
> 
>  
> 
> Self-sensing means there is a feedback loop between the sensor and the thing it senses. So, the sensor measures the sensed and the sensed measures the sensor. That is self-awareness. There's no need for any of the psychological hooha you often object to. There's no need for privileged information *except* that there has to be a loop. If anything is privileged, it's the causal loop.*/[NST===>Well, I would start with the regulatory loop.  <===nst] /* 
> 
>  
> 
> The real trick is composing multiple self-self loops into something resembling what we call a conscious agent. We can get to the uncanny valley with regular old self-sensing control theory and robotics. Getting beyond the valley is difficult:
> 
> */[NST===>Oh, getting into the uncanny territory is no problem.  Practically anything that stands up on its hind legs (or wheels) and looks us in the eye is uncanny.  But uncanniness doesn’t tell us anything about the circuitry we are looking at, does it?  It might tell us something about our circuitry. <===nst] /*
> 
> https://youtu.be/D8_VmWWRJgE  A similar demonstration is here: https://youtu.be/7ncDPoa_n-8
> 
>  
> 
>  
> 
>  
> 
> On 11/1/21 2:08 PM, thompnickson2 at gmail.com wrote:
> 
>> In fact, strictly speaking, I think literal self-awareness is impossible.  Because, whatever a machine knows about itself, it is a MODEL of itself based on well situated sensors of its own activities, just like you are and I am.  There is no privileged access, just bettah or wussah access.


-- 
"Better to be slapped with the truth than kissed with a lie."
☤>$ uǝlƃ


