[FRIAM] anonymity/deniability/ambiguity

David Eric Smith desmith at santafe.edu
Wed May 27 00:15:25 EDT 2020


Thanks Glen,

Yes, me too.  In these very wide-ranging discussions, it is hard to say which aspect of the question I most want to get at.  I think it changes depending on whom I am listening to: each speaker leaves something out, and I want to say that what is left out is part of the same system as the parts they do mention, and thus can’t be left out.  But then, forced to ask what _I_ most want to do, maybe I don’t know.  So, a random list of things that have been done, and of the not-yet-done next steps they suggest.

1. Shannon block coding, Gacs cellular automaton, done.  Beautiful in that they clarify that the idea of asymptotic error correction is large-deviation in origin, and that they show the nature of solutions through these nested-block structures, with weakening error-correction capacities as scales increase, but even weaker error leakage out of the lower blocks, so the scaling still works.  They achieve this, however, by having all homogeneous components, and a very clear and externally imposed notion of “message” and “error”.  That is what makes them comprehensible and allows us to see the point, but it also makes them hard to apply, except as metaphor, to problems in behavior that seem like they should be similar.
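
By way of a picture only, here is a minimal sketch of the nested-block logic (Python; the block size of 3 and the exact recursion are my simplifications, not Gacs’s construction): each level majority-votes over the level below, so although any one level’s correcting capacity is modest, the error rate it has to clean up shrinks much faster than that capacity weakens.

    # Sketch of nested-block error suppression (not Gacs's automaton).
    # If each copy fails independently with probability p, a block of 3
    # corrected by majority vote fails only when 2 or 3 copies fail:
    #   p' = 3*p^2*(1 - p) + p^3 = 3*p^2 - 2*p^3
    # Iterating gives the error leakage surviving each nesting level.
    def leaked_error(p, depth):
        for _ in range(depth):
            p = 3 * p**2 - 2 * p**3
        return p

    for depth in range(4):
        print(depth, leaked_error(0.1, depth))
    # depth 0: 0.1
    # depth 1: 0.028
    # depth 2: about 0.0023
    # depth 3: about 1.6e-5 (a doubly exponential collapse)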

2. The people who say “What rescues Science and makes it different is empiricism.”  The part I like is the invocation of a kind of Darwinism for concepts, meaning a mapping to Bayesian updating.  It puts a boundary between the rules of the world that a social-cognitive system doesn’t get to change, and the patterns and habits that they are allowed to freely innovate, and then tries to track the information flow through that boundary as the world constrains the behavioral patterns and habits.  What such a high-level gloss seems to leave out is that different organizations within language, and in the coupling of language to actions, are more or less good at taking in Bayesian suggestions.  It would only be in that different internal organization that “science” is a distinguishable branch of human communication and social cognition from other things people do, all of which are ultimately kept or lost by survival or extinction.  So I guess one wants to capture architectural aspects that distinguish those behavior systems.
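
As a concrete image of that boundary, a minimal Bayesian-updating sketch (Python; the coin, the hypothesis set, and the prior are all invented for illustration): the hypotheses and their weights are the freely innovated patterns and habits; the data stream is the part of the world the system doesn’t get to change.

    import random

    # Freely innovated side of the boundary: candidate rules and weights.
    hypotheses = [0.2, 0.5, 0.8]                # candidate coin biases
    weights = {h: 1.0 / 3.0 for h in hypotheses}

    # The world's side of the boundary: a rule we don't get to change.
    true_bias = 0.8

    for _ in range(100):
        heads = random.random() < true_bias     # the world emits evidence
        # Information crosses the boundary through the likelihood.
        weights = {h: w * (h if heads else 1.0 - h)
                   for h, w in weights.items()}
        z = sum(weights.values())
        weights = {h: w / z for h, w in weights.items()}

    # Mass concentrates on the hypothesis nearest the world's rule.
    print(weights)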

3. The people whose emphasis on “empiricism” and “experiment” seems to underemphasize the role of formal systems for communication and reason.  The thing I want here, which is maybe most “me” in all this, is to get beyond the component homogeneity in the Shannon-Fano or Gacs paradigms of item 1, and the sense of having an articulated “goal” to refer to in item 2.  I would like to have a toy model with behavioral error-correcting layers and environmental-Darwinian layers, in which we could do an information accounting of which errors are trapped within layers and which must be caught by signals that flow between them.  What most disappoints me in the little MTV-attention-span models one has to write for academic papers is that they don’t get at this heterogeneity of components that interact, and yet within which a single joint distribution is being narrowed and stabilized.  A version of that comes up in my life/metabolism interests; another version comes up in economics, where I wish to understand how a non-cooperative, individual-level decision structure can have as its outputs not “payoffs”, but _actions in the world_ that amount to building the infrastructure that makes coalitional-form games possible.  So, something like what “embodied cognition” did for robotics: to get away from having all the symbols represent numbers or other symbols, and toward having more of them somehow represent things.
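
For what it is worth, here is one crude place such a toy might start (entirely a strawman of my own, not a model from any literature): agents carry a behavior in triplicate; an inner layer majority-corrects copy errors; only errors that leak through reach the outer, Darwinian layer, which culls agents whose behavior mismatches the environment.  At least the accounting of which layer catches what becomes explicit.

    import random

    ENV = 1            # the environment's "correct" behavior
    P_NOISE = 0.1      # per-copy corruption rate
    N_AGENTS = 1000

    trapped_inside = leaked_to_selection = 0
    for _ in range(N_AGENTS):
        copies = [ENV if random.random() > P_NOISE else 1 - ENV
                  for _ in range(3)]
        behavior = int(sum(copies) >= 2)    # inner layer: majority vote
        if copies != [ENV] * 3 and behavior == ENV:
            trapped_inside += 1             # error trapped within the layer
        elif behavior != ENV:
            leaked_to_selection += 1        # only selection can catch this

    # Roughly 240 vs 28 out of 1000: most errors never reach selection.
    print(trapped_inside, leaked_to_selection)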

4. There remains the perennial problem of the phenomenologists.  They want to situate all of “reality” within “experience”, yet they insist they are not talking about introspection, and that they are not the modern incarnation of Descartes, or even of Russell when he says “sense data are immediate and everything else is mediated” [more or less].  That seems to me like another difference of kind, much as the talking/testing interaction is between things of different kinds.  It becomes a tangle in my mind as I try to decide how many axes of difference are really at work here.  There seems to be one between intersubjectivity and subjective experience, where the former acts as a check on the latter.  But there may be a different one between structured action and speech, nominally serving coordination at the group level, but applicable reflexively toward oneself, distinguishing subjective from (however approximate) objective aspects of some experience.  Maybe it’s a Silence of the Lambs thing I want; just whatever will make their chattering in my head stop.

5. Somewhere in here, I keep thinking it would be nice to combine the talking/being/doing implementation of robotics with things we know about expressive power and reflexivity in formal languages, to get at the idea that a system’s actions, together with its utterances, can carry information about real categories without containing (at least within the symbol set alone) a representation of those categories.  To get at a sense that things can be meaningful, but that language can be an unsuited medium for carrying some of the dimensions of meaning.  Or that languages of different expressive powers can have different scopes for reflexivity.  I feel like this was behind Frank’s comment the other day that “it’s all grist for the mill”.  I have a badly hallucinatory image of a fixed point theorem, where the fixed point corresponds to “meaning”, but isn’t necessarily carried “on” or “in” the patterns within any one component of the system; it is somehow constructed from what they do together.  So the way to express the meaning is to be able to represent and solve the construction.
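
The closest I can come to making the hallucination concrete is a toy arithmetic image (Python; the update rules are arbitrary contractions I made up): two components each update off the other’s state, and the value the pair settles into is a property of the joint map, stored “in” neither component’s own rule.

    # Toy image of a jointly constructed fixed point (arbitrary rules):
    # neither rule alone names the value 2.0 the pair lands on; it exists
    # only as the fixed point of the composed system.
    def step(x, y):
        return 0.5 * y + 1.0, 0.5 * x + 1.0

    x, y = 0.0, 10.0
    for _ in range(60):
        x, y = step(x, y)
    print(x, y)    # both converge to 2.0, the joint fixed point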

Actually, let me give up now.  I didn’t want to let this post from you go, because I am so much in agreement with it, but my list above is atrocious.  It starts okay, but gets more conceptually confused and jumbled as I go down.  Maybe in a different frame of mind I can think more clearly, and see an idea I could imagine putting time into.

Thanks,

Eric



> On May 27, 2020, at 7:59 AM, uǝlƃ ☣ <gepropella at gmail.com> wrote:
> 
> I really *want* to say something about building a machine (to be provocative) that implements a "reliable in the long-run without predicting the contents of reliable sentences" mechanism. I'm purposefully trying to elide your cognizing-social behavers in order to "flatten" the mechanism somewhat ... to root out the unspeakable-innerness-bogeyman, flatten the leaves of the graph, at least. This would still allow for hierarchy (even a very deep one), just without allowing for things that cannot be talked about.
> 
> I don't think it's all that useful to painstakingly knead Peirce's writings looking for a proto-structure, even though I often complain about people like Wolfram who consistently fail to cite those on whose shoulders they stand. It would be more interesting to simply try to build a system that has some hint of the sought features. Here, I'm thinking of Luc Steels' robots playing language games. A simulator [†] of Ackley's work you mention, or even of something like the Debian package dependencies might approach it, too. (Marcus often raises branch prediction methods, which may also apply to some extent.) I can't help but also think of Edelman and Tononi's "neural darwinism" and Hoffman's "interface theory of perception". I mention these because they used mechanistic simulation as persuasive rhetoric, albeit purely justificationist -- i.e. little to no attempt to *falsify* the simulation mechanisms against data taken from an ultimate referent, please correct me if I'm wrong.
> 
> Along some similar lines, I've been exposed to (again, mechanistic/constructive) simulation of "innovation", wherein propositions about how/why seemingly unique phenomena like Silicon Valley (as a system) or particular disruptors like the iPhone emerge.
> 
> I don't find any of these machines compelling, though. So I can't really say anything useful in response to your post, except to say that it would be *great fun* to try to construct a self-correcting truth machine. It would be even more fun to construct several of them and have them compete and be evaluated against an implicit objective function.
> 
> 
> [†] Re: Jon's cite of Baudrillard's dissimulation, I (obviously) have to disagree with the dichotomy between [dis]simulation. To act as if you don't have something you do have requires you to use other things you do have to hide the something you're hiding. I'm struggling to say this concretely, though. In the trustafarian case, the spanging (dissimulation) couples well with the dreadlock wax (simulation). Can there be dissimulation without a complementary simulation? And if not, if they always occur together, then distinguishing them may not buy us much.
> 
> 
> On 5/21/20 4:14 PM, David Eric Smith wrote:
>> I use the stripped down form in the hope of building a recursive tree of mutual refereeing, for all elements of scientific practice, now appealing to my mental image of Peter Gacs’s error-correcting 1D cellular automaton, which does this by nesting correcting structure within correcting structure.  Then I can look for every aspect of our practice that is trying to play this role in some way.  A subset include:
>> 1. Intersubjectivity to guard against individual delusion, ignorance, oversight, and similar hazards.
>> 2. Experimentation to guard against individual and group delusion etc, and to provide an additional active corrective against erroneous abduction from instances to classes.
>> 3. Adoption of formal language protocols:
>> 3a. Definitions, with both operational (semantic) and syntactic (formalist) criteria for their scope and usage
>> 3b. Rigid languages for argument, including logic but also less-formal standards of scientific argument, like insistence on null models and significance measures for statistical claims
>> 
>> There must be more, but the above are the ones I am mostly aware of in daily work.
>> 
>> These are, to some extent, hierarchical, in that those further down the list are often taken to have a control-theoretic-like authority to tag those higher-up in the list as “errors”.  However, like any control system, the controller can also be wrong, and then its authority allows it to impose cascades of errors before being caught.  Hence, I guess, Kant thought that a Newtonian space x time geometry was so self-evident that it was part of the “a priori” to physical reasoning. It was a kind of more-definite-than-a-definition criterion in arguments.  And it turned out not to describe the universe we live in, if one requires sufficient scope and precision.  Likewise, the amount of a semantics that we can capture in syntactic rules for formal speech is likely to always be less than all the semantics we have, and even the validity of a syntax could be undermined (Gödel).  But most common in practice is that the syntax could be used as a kind of parlor entertainment, but the
>> interpretation of it becomes either invalid or essentially useless when tokens that appeared in it turn out not to actually stand for anything.  This is what happens when things we thought were operational definitions are shown by construction of their replacements to have been invalid, as with the classical physics notion of “observable”, or the Newtonian convention of “absolute time”.
>> 
>> I would like to give Peirce’s “truth == reliable in the long run” a modern gloss by regarding the above the way an engineer would in designing an error-correction system.  The instances that are grouped in the above list are not just subroutines in a computer code, but embodied artifacts and events of practice by living-cognizing-social behavers and reasoners.  And then decide from a post-Shannon vantage point what such a system can and cannot do.  What notions of truth are constructible?  How long is the long run, for any particular problem?  What are the sample fluctuations in our state of understanding, as represented in placeholders for terms, rules, or other forms we adopt in the above list in any era, relative to asymptotes that we may or may not yet think we can identify?  How have errors cascaded through that list as we have it now, and can we use those to learn something about the performance of this way of organizing science?  (Dave Ackley of UNM did a lovely
>> project on the statistics of library overhauls for Linux utilities some years ago, which is my mental model in framing that last question.)  Formal tools to answer more interesting versions of questions like those.
>> 
>> I mentioned some stuff about this in a post a month or two ago, and EricC noted in a later post, by way of reply, that Peirce did a lot of statistics, so I understand I can’t take anything here outside the playpen of a listserv until I have first read everything Peirce wrote, and everything others wrote about what Peirce wrote, etc.  I suspect that, since Peirce lived before the publication of at least part of what is now understood about reliable error correction, large deviations, renormalization, automata theory, etc., there should be something new to say from a modern standpoint that Peirce didn’t already know, but that assertion is formalist, and thus valueless.  I have to do the exhaustive search through everything he actually did know, to point out something new that isn’t already in it (constructivist).
> 
> 
> -- 
> ☣ uǝlƃ
> -- --- .-. . .-.. --- -.-. -.- ... -..-. .- .-. . -..-. - .... . -..-. . ... ... . -. - .. .- .-.. -..-. .-- --- .-. -.- . .-. ...
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> archives: http://friam.471366.n2.nabble.com/
> FRIAM-COMIC http://friam-comic.blogspot.com/ 



