[FRIAM] (no subject)

David Eric Smith desmith at santafe.edu
Mon Jul 19 21:42:23 EDT 2021


Jon, hi,

I have owed you a response for a long time.  I think I kept imagining that, if I waited long enough, I would learn enough about a couple of things you asked to be able to understand the questions and perhaps answer usefully.  At this stage I think I am giving up any systematic hope of learning anything, and will consider myself lucky when random accidents result in my having learned something, which I find out about after the fact.  

What triggers my answer today is a specific question that was in some other email of yours that I haven’t found, about what hypergraphs are and whether they are “a topology”, as I think you put it.  I didn’t understand the question, I think because I don’t have a mathematician’s familiarity with just what scope the term “topology” is allowed to cover.  So I know a few things from coursework, but they are just specific cases.  A friend has tried to get me to read this book:
https://www.amazon.com/Topology-Through-Inquiry-AMS-Textbooks/dp/1470452766
which one of his junior colleagues is trying to walk him through to get him to understand a bit better.  Someday I will make time to read it properly….

Anyway, what came up today was a Sean Carroll interview with Wolfram, which fronts hypergraphs as Wolfram’s base-level abstraction.  It is a couple hours long, so I will listen to it when I have a couple hours….
https://www.preposterousuniverse.com/podcast/2021/07/12/155-stephen-wolfram-on-computation-hypergraphs-and-fundamental-physics/
Maybe Wolfram will provide a more compact sense of the “why” and not just the definition.  The little bit of general reading that I tried to do, though I did not need it for my particular applications, was in this:
https://link.springer.com/chapter/10.1007/978-1-4757-3714-1_3

For me, a hypergraph is just a natural representation for a variety of models of transitions that involve joint transformations.  I think there is another way of capturing dualities between states and transitions in what are called “bond graphs”, which it turns out Alan Perelson did some work on when he was young.  I think various projections of the even larger generality permitted within bond graphs will reduce to hypergraph models of the relation of states to transitions.  As I would ordinarily use the term, a hypergraph could be said to “have” a topology in the usual discrete sense of characterizing types of nodes and links and then giving a table of their adjacencies.  But I don’t know what it means to say it “is” a topology.  Apologies that I do not know how to engage better with what you are trying to get me to understand.
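To make concrete the discrete sense in which I would say a hypergraph “has” a topology, here is a minimal sketch in Python; the species and transition names are made up purely for illustration:

from collections import Counter

# A hypergraph in the discrete sense: typed nodes, typed links, and a
# table of adjacencies.  Nodes are species identities; hyperedges are
# joint transformations, each relating a multiset of inputs to a
# multiset of outputs.  All names here are hypothetical.
species = {"A", "B", "C", "AB"}
hyperedges = {
    "bind":  (Counter({"A": 1, "B": 1}), Counter({"AB": 1})),
    "split": (Counter({"AB": 1}), Counter({"A": 1, "C": 1})),
}

# The "table of adjacencies": which hyperedges touch which species.
incidence = {
    s: sorted(e for e, (ins, outs) in hyperedges.items()
              if s in ins or s in outs)
    for s in species
}
print(incidence)  # e.g. {"A": ["bind", "split"], "AB": ["bind", "split"], ...}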

Your other email that I saved because I hadn’t answered it follows, so I will try to do that one now too.  I will clip and lard in reply below:

> On May 6, 2021, at 1:44 AM, jon zingale <jonzingale at gmail.com> wrote:

> 1. Food webs were analyzed as weighted graphs with the obvious Markov
> chain interpretation[ρ]. Each edge effectively summarizing the complex
> predator-prey interactions found at level 2, but without the plethora
> of ODEs to solve.
> 
> 2. N-species Lotka-Volterra, while being a jumble of equations, offered
> dynamics. Here, one could get insight into how the static edge values
> of level 1 were in fact fluctuating values in n-dimensional phase
> space. But still, one is working with an aggregate model where species
> is summarized wholly by population count.

> 2'. "There is still an algebra of operation of reactions, but it is
> simpler than the algebra of rules, and mostly about counting."
> 
> I am not entirely sure that I follow the distinction. Am I far off in
> seeing an analogy here to the differences found between my one and two
> above?

I don’t think it is that relation, but one that goes in the other direction, from heterogeneity to homogeneity.  I will say the specific thing I mean, because those last few words could have meant anything:

The order-of-application dependence in rules seems to me capable of vast diversity of kinds.  Rules perform changes of patterns in context, but the presence of the context as part of that relation implies that rules are also creative.  To be specific: in chemistry, which is the best-constrained case, a rule takes a collection of atomic centers and a bond configuration, keeps the atomic centers, and replaces the initial bond configuration with some new one.  But these motifs of atoms and bond configurations occur within the context of entire molecules, which can contain much else besides the part that the rule is conditioned on or transforms.  So by changing some bonds in a molecule and preserving the rest of the molecule, the rule can actually create entirely new patterns, not in the system before, that draw partly on the conserved part and partly on the changed part.

Those newly created patterns can be contexts in which other rules suddenly are able to act where they were not before.  So the commutativity diagram, showing which rules become able to act only through the prior working of other rules (either on a particular molecule or _at all_ in the system), could be of almost any type.  If we were working not with chemistry but with the site graphs that Kappa was created to handle, the dependencies and transformations could consist of anything the systems biologist can imagine and formalize.
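As a toy illustration of that “pattern in context” point (a made-up sketch, not real chemistry and not actual Kappa syntax), take a molecule to be a set of bonds between labeled atomic centers, and a rule to be a bond motif it removes plus one it installs:

# A rule keeps the atomic centers but replaces one bond configuration
# with another, preserving everything else in the molecule as context.
# All atom labels and bonds below are hypothetical.

def apply_rule(molecule, old_bonds, new_bonds):
    """Fire a rule: remove the matched bond motif, add the replacement,
    and keep the untouched remainder of the molecule as context."""
    if not old_bonds <= molecule:          # rule is conditioned on its motif
        return None                        # no match: the rule cannot act here
    return (molecule - old_bonds) | new_bonds

# A molecule with context beyond the motif the rule is conditioned on.
mol = {frozenset({"C1", "O1"}), frozenset({"C1", "H1"}), frozenset({"O1", "H2"})}

# Rule: break the O1-H2 bond, form a C1-H2 bond (centers are conserved).
result = apply_rule(mol, {frozenset({"O1", "H2"})}, {frozenset({"C1", "H2"})})
print(result)
# The new pattern draws partly on the conserved part and partly on the
# changed part, and may now be a context in which some other rule can act.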

These dependencies, in which the action of a rule on an instance requires that the instance was created by other rules, are what Fontana et al. collect in the event sequences they call “stories”, as a representation of causation.
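Very loosely (this is my own sketch of the idea, not their actual algorithm), a story is a causal DAG of events, where each event records which earlier events produced the instances it consumed:

# A loose sketch of the "stories" idea: a causal DAG of events.
# Rule names and the event structure here are hypothetical.

events = []  # each event: (rule name, ids of events it depends on)

def record_event(rule, produced_by):
    """Append an event and return its id; produced_by lists the ids of
    the events whose products this event consumed."""
    events.append((rule, tuple(produced_by)))
    return len(events) - 1

e0 = record_event("bind", [])        # acts on inputs from the boundary
e1 = record_event("modify", [e0])    # possible only because e0 created its context
e2 = record_event("release", [e1])
print(events)  # the chain e0 -> e1 -> e2 is a minimal "story"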

The open-ended character of what rules might do, and thus of how they might depend on each other, makes the rule level incredibly powerful because it is generative, but also makes it a really hard level about which to say anything general.  It is handy that, in the systems we often want to study, the number of rules actually active tends to be limited; probably because we study systems in which some sort of selection had efficacy, and selection does not do well drawing inferences from mechanistic heterogeneity so extreme that every case is sui generis.

In comparison, the commutator dependency of the generator acting on the state space is of only one kind (for the population processes that are the scope of my comments here).  The only dependency is counting: for a transition to occur, the multiset of its inputs must exist.  If they are not present by default from some background or boundary condition, then they must have been brought into existence by some other transformations.  That counting dependence is the origin of the importance of feedbacks like autocatalysis.  But because it is all of more or less the same kind, I can tolerate its being much more ubiquitous without losing the ability to draw inferences about the system as a whole.
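In the same toy Python notation as above (all names made up), the whole of that dependency is one check on multisets of counts:

from collections import Counter

# At this level the only dependency is counting: a transition can fire
# in a state (a multiset of species counts) only if the multiset of its
# inputs is present.  Species names are illustrative.

def can_fire(state, inputs):
    """True when every input species is present in sufficient count."""
    return all(state[s] >= n for s, n in inputs.items())

def fire(state, inputs, outputs):
    """Consume inputs, produce outputs: the counting analogue of a rule."""
    if not can_fire(state, inputs):
        raise ValueError("inputs not present in sufficient count")
    new_state = state - inputs     # Counter subtraction of the inputs
    new_state.update(outputs)      # Counter addition of the outputs
    return new_state

state = Counter({"A": 2, "B": 1})
state = fire(state, Counter({"A": 1, "B": 1}), Counter({"AB": 1}))
print(state)  # Counter({'A': 1, 'AB': 1})
# Autocatalysis is the case where a transition's outputs replenish the
# very inputs that some transition (possibly itself) counts on.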

> 3'. "So the state space is just a lattice. The “generator” from Level 2
> is the generator of stochastic processes over this state space, and it
> is where probability distributions live."
> 
> Please write more on this. By 'just a lattice' do you mean integer-valued
> on account of the counts being so?

Yes, exactly.  Just counts (again, intended only to refer to this class of population processes).  

> Is the state space used to some
> extent, like a moduli/classifying space, for characterizing the
> species of reactions? I feel the fuzziest on how this level and the
> 2nd relate.

Yes, sorry.  Maybe it is best here if I backtrack, ask you to ignore for a moment everything I said above today, and just start cold.

1. Rules are finite but generative.  Examples are the _mechanisms_ of chemical reaction, defined at the level of the few atomic centers and bonds they act on, and _not_ on the level of fully-specified molecules.  

2. The thing that rules generate is a hypergraph of species and transformations.  In chemistry, the species are fully-specified molecule _identities_, and the transformations are the collections of reactants and products interconverted by a reaction.  The species specifications in the hypergraph add enough context beyond the rules that they do refer to fully-formed molecules.  But there is still context they omit: only the _identities_ of the species are marked as nodes in the hypergraph; it does not matter what their counts are in a given instance of some population.  Because any given list of mechanisms might turn out to generate indefinitely many distinct kinds of molecules, the hypergraph may be infinite or indefinitely large in extent, even from a finite rule set.  However, because it does not distinguish counts in the states as a whole, it still defines a very large equivalence relation over states. 

3. The state space (the lattice) is then, as you say, made up of vectors with non-negative integer coefficients, giving the counts of every type of molecule.  Even if the hypergraph, and hence the generator, were finite, since the generator can act on systems with any vector of counts, the number of states is in general infinitely larger than the number of species and reactions in the hypergraph.
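To tie the levels together in the same toy notation (the rate constants, species, and the truncation CAP are all made up for illustration), one can enumerate a finite window of the lattice and assemble the off-diagonal entries of the generator that moves probability distributions around on it:

import math
from collections import Counter
from itertools import product

# A finite hypergraph (2 species, 2 reactions), the lattice of count
# vectors as state space, and the off-diagonal generator entries of the
# mass-action stochastic process.  All numbers here are hypothetical.

species = ["A", "B"]
reactions = [                                      # (inputs, outputs, rate)
    (Counter({"A": 2}), Counter({"B": 1}), 1.0),   # 2A -> B
    (Counter({"B": 1}), Counter({"A": 2}), 0.5),   # B -> 2A
]
CAP = 4                                            # truncate the infinite lattice

states = list(product(range(CAP + 1), repeat=len(species)))

def propensity(state, inputs, k):
    """Mass-action propensity: k times falling factorials of the counts."""
    counts = dict(zip(species, state))
    a = k
    for sp, n in inputs.items():
        a *= math.perm(counts.get(sp, 0), n)       # zero if count < n
    return a

# Q[(s, t)] is the rate of jumping from state s to state t.
Q = {}
for s in states:
    for ins, outs, k in reactions:
        a = propensity(s, ins, k)
        if a > 0:
            c = Counter(dict(zip(species, s))) - ins + outs
            t = tuple(c.get(sp, 0) for sp in species)
            if all(x <= CAP for x in t):
                Q[(s, t)] = Q.get((s, t), 0.0) + a

print(len(states), "states from a hypergraph with 2 species and 2 reactions")

Even with the lattice cut off at 4 copies of each species, the 25 states already outnumber the 4 nodes and edges of the hypergraph, which is the point about the levels: the hypergraph is an equivalence structure over a much larger state space.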

All best,

Eric
