[FRIAM] What is an agent [was: Philosophy and Science]

glen gepropella at gmail.com
Tue Jul 18 11:45:39 EDT 2023


Hm. It seems to me that self-attention allows a kind of circularity, where dependence on ancestors (or a built environment, or whatever "deep" structure exists in one's current context) does not. Nick's diatribe against teleonomy back in '87 focuses on circularity and, I suspect, would appeal to those of us who think circularity is a bug. I suppose it's reasonable to think of the sequential versus parallel inputs into such a beast and analogize the parts that depend (crucially) on iteration to the "fundamental" conditions and the parts that depend (crucially) on synchrony/now/instantaneity to the "ordinary" ones. And I guess that's why Eric brought it up. A weakly stable thing like a hurricane seems to depend more on now-ness and less on any kind of deep structure memorized in the weather. But we might claim the structure, size, and geology of the earth as a fundamental driver for any *particular* hurricane (though not for the abstract class of hurricane-like things). So the "fundamental" conditions serve as a long-term memory structure, or a canalization, where structured here-and-now objects serve as a proxy for distant-past, high-order Markovian things.

This might imply that the registration of an agent is simply a forgetting, a way to truncate the impossibly long/big details we would have to yap about in order to treat it as non-agent. I.e. agency is a convenient fiction, a shortcut, similar to free will.

But unlike some, I'm not allergic to circularity and don't tend to believe that unrolled iteration is identical to rolled iteration. Even if self-attention in transformers is ultimately unrollable (which I think it is), it still allows for a kind of circularity upon which one might be able to found agency (or "anti-found" anyway >8^D).
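To pin down what I mean by "unrollable", here's a minimal sketch (single attention head, no learned projections, plain NumPy): causal self-attention computed once as the usual masked matrix product (rolled) and once position by position (unrolled). The two agree, even though I'd still claim the circular reading buys you something the unrolled one doesn't.

```python
# Causal self-attention two ways: rolled (one masked matrix product) and
# unrolled (an explicit loop where each position attends only to its past).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, d = 5, 4                      # sequence length, width
Q = rng.normal(size=(T, d))      # queries
K = rng.normal(size=(T, d))      # keys
V = rng.normal(size=(T, d))      # values

# Rolled form: score everything at once, mask out the future.
scores = Q @ K.T / np.sqrt(d)
mask = np.tril(np.ones((T, T), dtype=bool))
rolled = softmax(np.where(mask, scores, -np.inf)) @ V

# Unrolled form: iterate, each step seeing only positions 0..t.
unrolled = np.empty_like(rolled)
for t in range(T):
    w = softmax(Q[t] @ K[:t + 1].T / np.sqrt(d))
    unrolled[t] = w @ V[:t + 1]

assert np.allclose(rolled, unrolled)
```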

On 7/17/23 21:27, Roger Critchlow wrote:
> 
> 
> On Mon, Jul 17, 2023 at 2:35 PM David Eric Smith <desmith at santafe.edu <mailto:desmith at santafe.edu>> wrote:
> 
>     [...] [Yoshi Oono's The Nonlinear World]
>     in which he argues that the phenomena you mention are only “pseudo-complex”.  Yoshi, like David but with less of the predictable “Darwin-was-better; now what subject are we discussing today?” vibe, argues that there is a threshold to “true complexity” that is only crossed in systems that obey what Yoshi calls a “Pasteur principle”; they are of a kind that effectively can’t emerge spontaneously, but can evolve from ancestors once they exist.  He says (translating slightly from his words to mine) that such systems split the notion of “boundary conditions” into two sub-kinds that differ qualitatively.  There are the “fundamental conditions” (in biology, the contents of genomes with indefinitely deep ancestry), that mine an indefinite past sparsely and selectively, versus ordinary “boundary conditions”, which are the dense here-and-now.  The fundamental conditions often provide criteria that allow the complex thing to respond to parts of the here-and-now, and ignore other
>     parts, feeding back onto the update of the fundamental conditions.
> 
>     I don’t know when I will get time to listen to David’s appearance with Sean, so with apologies cannot know whether his argument is similar in its logic.  But Yoshi’s framing appeals to me a lot, because it is like a kind of spontaneous symmetry breaking or ergodicity breaking in the representations of information and how they modulate the networks of connection to the space-time continuum.  That seems to me a very fertile idea.  I am still looking for some concrete model that makes it compelling and useful for something I want to solve.  (I probably have written this on the list before, in which case apologies for being repetitive.  But this mention is framed specifically to your question whether one should be disappointed in the demotion of the complexity in phenomena.)
>     [...]
>>     On Jul 18, 2023, at 4:37 AM, Stephen Guerin <stephenguerin at fas.harvard.edu <mailto:stephenguerin at fas.harvard.edu>> wrote:
>>
>>     [...]
>>
>>      1. Teleonomic Material: the latest use by David Krakauer on Sean Carroll's recent podcast <https://www.preposterousuniverse.com/podcast/2023/07/10/242-david-krakauer-on-complexity-agency-and-information/> in summarizing Complexity. Hurricanes, flocks and Benard Cells according to David are not Complex, BTW. I find the move a little frustrating and disappointing but I always respect his perspective.
> 
> Okay, I listened to the podcast.
> 
> DK says that real complexity starts with teleonomic matter, also known as particles that think.  He says that such agents carry around some representation of the external world.  And then the discussion gets distracted to other topics, at one point getting to "large language model paper clip nightmares".
> 
> My response to Eric's description of Oono's  "Pasteur principle" was that it sounds a lot like "Attention Is All You Need" (https://arxiv.org/pdf/1706.03762.pdf <https://arxiv.org/pdf/1706.03762.pdf>), the founding paper of the Transformer class of neural network models.
> 
> The "fundamental conditions" in a Transformer would be the trained neural net which specifies the patterns of attention and responses learned during training.  The "ordinary conditions" would be the input sequence given to the Transformer.  The Transformer breaks up the input sequence into attention patterns, evaluates the response to the current set of input values selected by the attention patterns,  emits an element to the output sequence, and advances the input cursor.
> 
> Anyone else see the family resemblance here?
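I do, at least for the loop Roger describes. Cartooned, with a bigram lookup table standing in for the trained net (nothing transformer-like survives in the cartoon, but the split between the two kinds of conditions does):

```python
# "Fundamental conditions": a frozen table fixed by training, mining the
# past sparsely and selectively. This toy table is purely illustrative.
WEIGHTS = {
    "the": "grass",
    "grass": "suffers",
}

def generate(prompt, steps):
    """Emit one element at a time, advancing the cursor over the sequence."""
    seq = list(prompt)            # "ordinary conditions": the dense here-and-now
    for _ in range(steps):
        nxt = WEIGHTS.get(seq[-1])
        if nxt is None:           # nothing learned for this context; stop
            break
        seq.append(nxt)           # emit an element and advance the cursor
    return seq

print(generate(["the"], 3))      # ['the', 'grass', 'suffers']
```

The fundamental conditions decide which parts of the here-and-now get responded to and which get ignored, exactly the selectivity Eric attributes to Yoshi's framing.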


-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

