[FRIAM] maximally-stateful versus purely-functional: some thoughts on diffusion-limited aggregation

Marcus Daniels marcus at snoutfarm.com
Wed Sep 9 15:51:54 EDT 2020


A distinction I would make is between simulations that track their state carefully and those that fail to declare who mutates what.  This matters most in parallel simulations, where two branches of execution can occur at once and contention or incomplete updates become possible.  A functional program doesn't need mechanisms for fine-grained access control over the members of a class, because one can see just by comparing function types whether interference is possible.  f(Alice) and f(Bob) can run at once, but f(Party,Alice) and f(Party,Bob) may need to be sequenced as f(f(Party,Bob),Alice) if f returns a modified Party.
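A minimal Haskell sketch of that point (the Guest and Party types here are illustrative, not from any particular codebase): a function that touches only its own argument can run on Alice and Bob at once, while a function that takes and returns the shared Party has a type that forces its calls to be threaded, f(f(Party,Bob),Alice)-style.

```haskell
-- Illustrative types: two independent agents and one shared resource.
data Guest = Guest { name :: String, greeted :: Bool } deriving Show
data Party = Party { guestsGreeted :: Int } deriving Show

-- No shared state in the type: these calls cannot interfere,
-- so greet alice and greet bob may run in parallel.
greet :: Guest -> Guest
greet g = g { greeted = True }

-- Shared state in the type: the returned Party must be threaded
-- into the next call, which serializes the two updates explicitly.
checkIn :: Party -> Guest -> Party
checkIn p _ = p { guestsGreeted = guestsGreeted p + 1 }

main :: IO ()
main = do
  let alice = Guest "Alice" False
      bob   = Guest "Bob" False
      party = Party 0
  print (greet alice, greet bob)             -- independent; order irrelevant
  print (checkIn (checkIn party bob) alice)  -- f(f(Party,Bob),Alice)
```

The point is visible in the signatures alone: nothing in `Guest -> Guest` can observe another call, whereas `Party -> Guest -> Party` names the contended resource in both argument and result.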

-----Original Message-----
From: Friam <friam-bounces at redfish.com> On Behalf Of uǝlƃ
Sent: Wednesday, September 9, 2020 10:20 AM
To: FriAM <friam at redfish.com>
Subject: Re: [FRIAM] maximally-stateful versus purely-functional: some thoughts on diffusion-limited aggregation

I'm enjoying this conversation. It strikes me that, often anyway, the core concept is that of modeling: one thing acting *as if* it is another thing. Techniques like lazy evaluation and minimized side effects play a prominent role in one's choice of which "platform" or paradigm to use to model (in any of the 3 styles: simulate, emulate, or replace) some referent. Accidents, abuse, and creativity seem facilitated by state, whereas safety (though paradoxically not security) seems facilitated by function. I think one of the more difficult domains in which to have this discussion is epidemiology, where counterfactual methods for discovering/exploiting causality are difficult to execute. (E.g. predictions of COVID-19 deaths in rural areas "fail" because preventative measures based on those "false" predictions succeed.) A purely functional simulation, engineered to abstract out the (stateful) details, cannot help evaluate counterfactual scenarios. Traditional AI/ML has the same problem. Models fail extensionally because they're intensionally unfaithful. Or, in my preferred language, behavioral analogy fails because of a lack of structural analogy.

On 9/9/20 8:24 AM, Roger Frye wrote:
> Problems where the process is at least as important as the result vary greatly, so multiple types of implementation might apply.
> 
> I think of Eugenio Moggi's monads as a way to cheat in functional programming by encapsulating extra state. Same with monads in category theory augmenting functors with data type. Same with posets as a way of associating structure onto undifferentiated sets. Same with closures in Lisp to bind the environment with a supposedly pure lambda function in order to have different behaviors.
> 
> Memoizing and other forms of caching also bind external data, here with the specific goal of skipping repeated calculations or data transfers.
> 
> I have used Petri nets to optimize multiple resources in a system of elevators and to assign the parallel use of components of a CDC 1604 computer. PROLOG is often applied to conundrum-type problems. Same with Bayesian networks. In all these cases, precisely known logic rules drive the search for an optimal solution.
> 
> Another approach in algorithmically intractable situations especially where the constraints are probability distributions is to search for hidden parameters given external structure with Markov Chain Monte Carlo simulation. Reinforcement learning is another example of searching a large process space.
> 
> I lean toward distributed computations like swarms of evolving agents as a way of incorporating state and path dependence. I like the way random connections in reservoir computing mechanisms like liquid state machines and echo state networks can navigate chaotic processes and can be quickly retrained for different constraints.
> 
> Finite element analysis could be adapted to solve diffusion limited aggregation or related problems. The "grid" would adapt during the simulation to work at the frontier of the bound and free states. This could apply more generally to Ising models and cellular automata where the computational focus is local and usually sparse.
> 
> I say finite element "grid" because that has been the traditional way to cover large computational surfaces as in vector computing or GPUs or Tensor Processing Units, but more appropriate structures for local activity would be ragged arrays or other types of sparse structures implementing quasi-crystalline patterns or random graphs.
> 
> Quantum simulation of molecular networks seems like the ultimate way to distribute computation and state with infinite numbers of paths.
> 
> 
> On Tue, Sep 8, 2020, 8:57 PM Jon Zingale <jonzingale at gmail.com <mailto:jonzingale at gmail.com>> wrote:
> 
>     RogerF,
> 
>     What I am calling /maximally-stateful/ is a limiting notion opposed to
>     that of /purely functional/. Effectively, a /maximally stateful/ process
>     is one whose side-effects dominate the computation, with little else
>     analytically calculable. At present, I am thinking of things like clouds,
>     cellular automata, or the geological history of a mountain. I am thinking
>     about computations where: the term /process/ is more apt than /function/,
>     computations whose specification may be trivial and yet depend crucially
>     on their initial conditions, the evolving /process/ manifests in bouts of
>     combinatorial explosions, the /intension/ matters more than the /extension/,
>     and any particular run is unambiguously path-dependent.
> 
>     Some time ago, on the list, Glen mentioned diffusion-limited aggregation
>     (DLA) in the context of our on-going discussions centered around modal
>     logic, determinism, and epiphenomena. Glen suggested DLAs as an example
>     of where the conceptual cleanliness of algebraic thinking, categorical
>     thought, and /purely functional/ programming may be disadvantageous to the
>     reasoner. Part of his criticism focuses on insufficient characterization
>     by way of Poset-centric descriptions. Inspired by Glen's skepticism, I
>     set out to explore what forms /purely functional/ diffusion-limited
>     aggregation may take. To a first approximation, the computation may
>     be understood as a function taking a pair of lists to a pair of lists,
> 
>     DLA1 :: ([Free], [Bound]) -> ([Free], [Bound]),
>     where Free and Bound are type synonyms for a general particle type.
> 
>     Initially, the collection [Bound] consists of stationary germs and the
>     collection [Free] consists of random walking particles. As freely moving
>     particles enter the neighborhoods of bound particles, the free particles
>     themselves become bound. Functionally, we can interpret the function as a
>     process taking elements from the left list to elements of the right list.
>     However, if this were all there was to say about the problem, then we could
>     be done with it and simply write:
> 
>     DLA1 (fs, bs) = ([], fs ++ bs)
> 
>     However, this isn't the case. We want more from a DLA, namely its state.
>     We are not just concerned with the fact /that/ free particles become bound,
>     but with /how/ these particles become bound, the /intensional/ content of the
>     computation, its stateful nature. Further, we may concern ourselves with
>     sensitivity to path-dependence, the manifested form's /being/ and /becoming/
>     as a historical fact.
> 
>     To address these questions consider a dual interpretation. Rather than
>     considering random walks on free particles, we can assign probability
>     distributions to the bound particles (the surface-bound particles having
>     non-zero probability). At every time step, each cell has some probability
>     of /capturing/ a free particle, and the collection of bound particles
>     grows in this way[*]. As the surface changes, the distributions over the
>     entire surface are recalculated, and from this perspective, it is clear
>     that the combinatorial species determines future probabilities. To my
>     mind, this treatment begs the question, "What might be gained from
>     structuring the space of all possible bound states as a Poset"? In this
>     interpretation, any DLA can be given /extensionally/ as a path (really a
>     lattice of 'evolutions satisfying boundary conditions') from some germ
>     state to some final state.
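One minimal reading of that poset structure (an illustrative sketch, not something fixed by the thread): order bound states by inclusion, so that the successive states of any single DLA run form a chain from germ to final state.

```haskell
type Particle = (Int, Int)
type BoundState = [Particle]

-- Poset order on bound states: s1 `leq` s2 when every particle bound
-- in s1 is also bound in s2. A DLA run only ever adds particles, so
-- the states it visits form a chain under this order, and distinct
-- runs from the same germ are distinct paths in the same poset.
leq :: BoundState -> BoundState -> Bool
leq s1 s2 = all (`elem` s2) s1

germ, later :: BoundState
germ  = [(0, 0)]
later = [(0, 0), (1, 0)]
```

Under this reading, the lattice of 'evolutions satisfying boundary conditions' is the sub-poset of states reachable from the germ, and an extensional DLA is exactly a maximal chain through it.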
> 
>     From a /purely functional/ perspective, it is natural (since Moggi) to
>     structure stateful computations in terms of monads and free algebras[◻].
>     There are a number of ways that monadic interpretations may arise in
>     this context: The State monad to accumulate data separate from function,
>     the Giry monad to structure a Markov transition system, or perhaps a
>     monadic pointed space to structure the encapsulation of bound particles.
>     While the first two are likely essentially necessary, the last points to a
>     tempting but possibly exogenous interpretation. How does one know when
>     they have accurately characterized the content of a theory? How does one
>     know that there is nothing more to be gleaned?
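A minimal sketch of the first of those structurings, the State monad accumulating the aggregate alongside the computation (hand-rolled here so the block is self-contained; `Control.Monad.State` from the mtl package provides the same interface, and `capture` is an illustrative name):

```haskell
-- A hand-rolled State monad, threading the growing aggregate through
-- a computation so that "who mutates what" is declared in the type.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')
instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s') = f s; (a, s'') = g s' in (h a, s'')
instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

type Particle = (Int, Int)

-- Bind one particle into the aggregate, reporting the new aggregate size.
capture :: Particle -> State [Particle] Int
capture p = State $ \bound -> (length (p : bound), p : bound)

-- Capture two particles, starting from a single germ at the origin.
demo :: (Int, [Particle])
demo = runState (capture (1, 0) >> capture (0, 1)) [(0, 0)]
```

Here `demo` evaluates to `(3, [(0,1),(1,0),(0,0)])`: the accumulated state is separate from the returned value, yet every mutation of it is explicit in the `State [Particle]` type.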


--
↙↙↙ uǝlƃ
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

