[FRIAM] maximally-stateful versus purely-functional: some thoughts on diffusion-limited aggregation

Roger Frye frye.roger at gmail.com
Wed Sep 9 11:24:30 EDT 2020


Jon,
Thanks for the clarifications on your current focus.

Problems where the process is at least as important as the result vary
greatly, so multiple types of implementation might apply.

I think of Eugenio Moggi's monads as a way to cheat in functional
programming by encapsulating extra state. Same with monads in category
theory, which augment functors with extra structure. Same with posets as a
way of imposing structure on undifferentiated sets. Same with closures in
Lisp, which bind an environment to a supposedly pure lambda function in
order to produce different behaviors.
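The closure trick translates directly into Haskell; here is a minimal
sketch, with makeAdder as a purely illustrative name:

```haskell
-- A closure binds its defining environment to an otherwise pure lambda.
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n   -- n is captured from the enclosing environment

add3, add10 :: Int -> Int
add3  = makeAdder 3    -- same lambda body, different bound environment
add10 = makeAdder 10
```

Both add3 and add10 share one lambda body; only the captured environment
differs, which is exactly the "different behaviors" point above.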

Memoizing and other forms of caching also bind external data, here with
the specific goal of skipping repeated calculations or data transfers.
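In lazy Haskell the cache can be an ordinary data structure; a minimal
sketch of memoized Fibonacci numbers (fibs and fib are illustrative names):

```haskell
-- The list 'fibs' is computed at most once; every later lookup reuses
-- the already-evaluated prefix, skipping the repeated calculation.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```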

I have used Petri nets to optimize multiple resources in a system of
elevators and to assign the parallel use of components of a CDC 1604
computer. PROLOG is often applied to conundrum-type problems, as are
Bayesian networks. In all these cases, precisely known logic rules drive
the search for an optimal solution.

Another approach in algorithmically intractable situations, especially
where the constraints are probability distributions, is to search for
hidden parameters given external structure with Markov chain Monte Carlo
(MCMC) simulation. Reinforcement learning is another example of searching
a large process space.
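The core of such a search is the Metropolis acceptance step; here is a toy
sketch of a single step, with a hand-rolled linear congruential generator
standing in for a real random-number source (all names are illustrative):

```haskell
-- Toy pure RNG: a linear congruential generator, NOT a library function.
lcg :: Int -> Int
lcg s = (1103515245 * s + 12345) `mod` 2147483648

-- Draw a uniform value in [0,1) and return the advanced seed.
uniform :: Int -> (Double, Int)
uniform s = let s' = lcg s in (fromIntegral s' / 2147483648, s')

-- One Metropolis step: accept the proposal x' with probability
-- min(1, p(x') / p(x)), where p is an unnormalized density.
metropolisStep :: (Double -> Double) -> Double -> Double -> Int -> (Double, Int)
metropolisStep p x x' seed =
  let (u, seed') = uniform seed
  in if u < min 1 (p x' / p x) then (x', seed') else (x, seed')
```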

I lean toward distributed computations like swarms of evolving agents as a
way of incorporating state and path dependence. I like the way random
connections in reservoir computing mechanisms like liquid state machines
and echo state networks can navigate chaotic processes and can be quickly
retrained for different constraints.
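A single reservoir update in an echo state network is just x' = tanh(W x +
Win u); a minimal sketch, where w and win stand in for the fixed random
reservoir and input matrices a real ESN would use:

```haskell
-- Naive dense matrix-vector product over lists of rows.
matVec :: [[Double]] -> [Double] -> [Double]
matVec m v = [ sum (zipWith (*) row v) | row <- m ]

-- One echo-state reservoir update: x' = tanh(W x + Win u).
-- In a real ESN, w and win are random and fixed; only the readout layer
-- (not shown) is retrained for different constraints.
esnStep :: [[Double]] -> [[Double]] -> [Double] -> [Double] -> [Double]
esnStep w win u x = map tanh (zipWith (+) (matVec w x) (matVec win u))
```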

Finite element analysis could be adapted to solve diffusion-limited
aggregation or related problems. The "grid" would adapt during the
simulation to work at the frontier of the bound and free states. This could
apply more generally to Ising models and cellular automata where the
computational focus is local and usually sparse.
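A minimal sketch of that local, sparse focus: track only the set of bound
sites and derive the active frontier from it, so each step examines the
boundary rather than the whole grid (all names are illustrative):

```haskell
import qualified Data.Set as Set

type Site = (Int, Int)

neighbors :: Site -> [Site]
neighbors (x, y) = [(x+1,y), (x-1,y), (x,y+1), (x,y-1)]

-- The active "grid" is just the frontier: empty sites adjacent to a
-- bound site. Only these need to be examined at each step.
frontier :: Set.Set Site -> Set.Set Site
frontier bound =
  Set.fromList [ n | s <- Set.toList bound
                   , n <- neighbors s
                   , not (n `Set.member` bound) ]

-- A free particle binds exactly when it steps onto the frontier.
bindIfAdjacent :: Set.Set Site -> Site -> Set.Set Site
bindIfAdjacent bound p
  | p `Set.member` frontier bound = Set.insert p bound
  | otherwise                     = bound
```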

I say finite element "grid" because that has been the traditional way to
cover large computational surfaces as in vector computing or GPUs or Tensor
Processing Units, but more appropriate structures for local activity would
be ragged arrays or other types of sparse structures implementing
quasi-crystalline patterns or random graphs.

Quantum simulation of molecular networks seems like the ultimate way to
distribute computation and state with infinite numbers of paths.


On Tue, Sep 8, 2020, 8:57 PM Jon Zingale <jonzingale at gmail.com> wrote:

> RogerF,
>
> What I am calling *maximally-stateful* is a limiting notion opposed to
> that of *purely functional*. Effectively, a *maximally stateful* process
> is one whose side-effects dominate the computation, with little else
> analytically calculable. At present, I am thinking of things like clouds,
> cellular automata, or the geological history of a mountain. I am thinking
> about computations where: the term *process* is more apt than *function*,
> computations whose specification may be trivial and yet depend crucially
> on their initial conditions, the evolving *process* manifests in bouts of
> combinatorial explosions, the *intension* matters more than the
> *extension*, and any particular run is unambiguously path-dependent.
>
> Some time ago, on the list, Glen mentioned diffusion-limited aggregation
> (DLA) in the context of our on-going discussions centered around modal
> logic, determinism, and epiphenomena. Glen suggested DLAs as an example
> of where the conceptual cleanliness of algebraic thinking, categorical
> thought, and *purely functional* programming may be disadvantageous to the
> reasoner. Part of his criticism focuses on insufficient characterization
> by way of Poset-centric descriptions. Inspired by Glen's skepticism, I
> set out to explore what forms *purely functional* diffusion-limited
> aggregation may take. To a first approximation, the computation may
> be understood as a function taking a pair of lists to a pair of lists,
>
> DLA1 :: ([Free], [Bound]) -> ([Free], [Bound]),
> where Free and Bound are type synonyms for a general particle type.
>
> Initially, the collection [Bound] consists of stationary germs and the
> collection [Free] consists of random walking particles. As freely moving
> particles enter the neighborhoods of bound particles, the free particles
> themselves become bound. Functionally, we can interpret the function as a
> process taking elements from the left list to elements of the right list.
> However, if this were all there was to say about the problem, then we could
> be done with it and simply write:
>
> DLA1 (fs, bs) = ([], fs ++ bs)
>
> However, this isn't the case. We want more from a DLA, namely its state.
> We are not just concerned with the fact *that* free particles become
> bound, but with *how* these particles become bound, the *intensional*
> content of the computation, its stateful nature. Further, we may concern
> ourselves with sensitivity to path-dependence, the manifested form's
> *being* and *becoming* as a historical fact.
>
> To address these questions, consider a dual interpretation. Rather than
> considering random walks on free particles, we can assign probability
> distributions to the bound particles (the surface-bound particles having
> non-zero probability). At every time step, each cell has some probability
> of *capturing* a free particle, and the collection of bound particles
> grows in this way[*]. As the surface changes, the distributions over the
> entire surface are recalculated, and from this perspective, it is clear
> that the combinatorial species determines future probabilities. To my
> mind, this treatment raises the question, "What might be gained from
> structuring the space of all possible bound states as a Poset?" In this
> interpretation, any DLA can be given *extensionally* as a path (really a
> lattice of 'evolutions satisfying boundary conditions') from some germ
> state to some final state.
>
> From a *purely functional* perspective, it is natural (since Moggi) to
> structure stateful computations in terms of monads and free algebras[◻].
> There are a number of ways that monadic interpretations may arise in
> this context: The State monad to accumulate data separate from function,
> the Giry monad to structure a Markov transition system, or perhaps a
> monadic pointed space to structure the encapsulation of bound particles.
> While the first two are likely essentially necessary, the last points to
> a tempting but possibly exogenous interpretation. How does one know when
> they have accurately characterized the content of a theory? How does one
> know that there is nothing more to be gleaned?
>
> Jon
>
> [*] It is worth noting that to reach an analytically amenable form, I
> was initially tempted to replace probability distributions by growth
> rates over bound particles. Unfortunately, models of this type fail to
> exhibit the symmetry breaking characteristic of DLAs. For example,
> consider a computation beginning with a single germ, each side of which
> is given the same probability and thus the same growth rate.
>
> [◻] The relationship between the usage of monads in computer science
> (for structuring programs or handling side-effects) and the usage of
> monads in mathematics (for founding algebras) remains mystifying to me.
> To my naive eye, it appears as one of those unlikely places where a thing
> and the language for talking about the thing identify, but who knows?
>
>

