[FRIAM] falsifying the lost opportunity updating mechanism for free will

Jon Zingale jonzingale at gmail.com
Tue Jun 23 21:30:18 EDT 2020


As I reflect, search, and read more on Markov processes I feel a need
to refine my earlier statements. The temptation is to short-circuit the
*history doesn't matter* quality of a Markov process[⚥]. Clearly, this
has its own difficulties and misses Glen's *intent*: he is explicitly
desiring a non-Markovian model. It seems to me that being non-Markov
is baked into the underlying logic and manifests in concepts like
observability, uncertainty, etc...

To my mind, there is a possible correspondence between compressibility
as found in Chaitin-Kolmogorov complexity and the property of being
Markov. We determine the degree of randomness in a process by measuring
how far the process is from being Markov (say, with a χ² statistic on
its transition counts). We determine the
degree of randomness in a sequence by measuring the distance a sequence
is from random (compressibility). In both cases, we rely on a notion of
observability, but perhaps in different ways.

For instance, in computational theory, recognition of a language by a
machine offers a means of computing statements in the language. Machines
are ordered by their capacity, and eventually, even Turing machines are
limited to what they *can know*. In the section of Gisin's paper entitled
"*Non-deterministic Classical Physics*", Gisin relies on a result from
symbolic dynamics that I am continuing to work through. Effectively,
the result can be summarized as saying that limited observability of
chaotic dynamics *entails* randomness[‡].
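The flavor of that result can be seen in a few lines. The sketch below uses the standard fully chaotic logistic map x → 4x(1−x) with a two-cell observation (which half of [0, 1] the state is in); the conjugacy with the Bernoulli shift is the textbook result, and the numerics here are only suggestive, not a proof. The seeds 0.3 and the 1e-12 perturbation are arbitrary choices of mine.

```python
def logistic_symbols(x0, n):
    """Iterate the fully chaotic logistic map x -> 4x(1-x) and record
    one coarse observation per step: which half of [0, 1] holds the
    state. The dynamics are deterministic, but the observed bit
    sequence behaves like a fair coin (the map is conjugate to the
    Bernoulli shift)."""
    x, out = x0, []
    for _ in range(n):
        out.append(0 if x < 0.5 else 1)
        x = 4.0 * x * (1.0 - x)
    return out

a = logistic_symbols(0.3, 2000)
b = logistic_symbols(0.3 + 1e-12, 2000)

print("fraction of 1s    :", sum(a) / len(a))    # near 1/2
print("first 10 bits same:", a[:10] == b[:10])   # determinism, early on
tail_disagreement = sum(x != y for x, y in zip(a[100:], b[100:])) / 1900
print("tail disagreement :", tail_disagreement)  # near 1/2 after divergence
```

The two orbits start indistinguishable through the coarse observation, then decorrelate completely: limited observability plus sensitive dependence looks, to the observer, exactly like randomness.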

I feel a need to be cautious when asked whether a river delta is Markov.
I believe it is worth arguing that the river delta itself is possibly
inaccessible, and that to decide whether it is Markov, we first need to fix a
model. If we bake determinism into the model, we will get determinism
from the model. On the other hand, we know that modeling systems of
incomplete information can be strikingly useful. By analogy, while each
bet in the gambler's ruin may have a positive *expectation*, the gambler
still has a finite budget.
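That analogy is easy to make quantitative. The sketch below (parameters are my own arbitrary choices) plays an even-money bet that is favorable in expectation, yet a gambler staking a small finite bankroll is still ruined more often than not:

```python
import random

def ruin_frequency(p_win=0.55, bankroll=3, trials=2000,
                   max_steps=1000, seed=0):
    """Monte Carlo: even-money bet with positive expectation per play
    (p_win > 1/2), staked from a finite bankroll. Classical gambler's
    ruin gives eventual ruin probability ((1 - p_win)/p_win)**bankroll,
    about 0.55 for these parameters."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        wealth = bankroll
        for _ in range(max_steps):
            wealth += 1 if rng.random() < p_win else -1
            if wealth == 0:
                ruined += 1
                break
    return ruined / trials

print("ruin frequency:", ruin_frequency())
```

A model that looks only at per-bet expectation misses the constraint that actually decides the gambler's fate.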

So, back to the case of our river delta. If we want a non-Markov river
delta model, we can look to ways of limiting what can be known. There
can be limited knowledge in space, time, or process (say). For the
purpose of fleshing out these constraints, consider an analogy to Conway's
game. The whole river (the board-state) may be reasonably modeled such
that the (n+1)th state is determined by the nth state. Because a river
delta participates in *distributary formation* the underlying state space
is compelled to evolve. This evolution of the delta will depend on some
knowledge of the state of the ocean. Extending the model to include the
ocean may very well require revisiting the objective meaning of river
delta (relaxing spatial constraint).

Even while the board-state is Markov, understanding the evolution of a given
neighborhood with imperfect information may not be Markov (observing
the Mississippi River from Vicksburg, say). Determination of the future
state of this stretch of river will depend not only on history but on the
state of the flow/channel further upstream. In Conway's game, we may
wonder about the appearance of a *glider* within a specific region. Without
knowledge of adjacent neighborhoods and because of the non-uniqueness
of glider formation, we may see our glider at time k but know nothing
about future states of our neighborhood (time and space matter).
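This point can be made concrete in Conway's own terms. In the sketch below (grid size and window location are arbitrary choices of mine), two boards look identical through a 3x3 window at time t, yet their windows differ at time t+1, because a blinker sits just outside one window's view. So the window's current state alone cannot determine its future: the locally observed process is not Markov even though the full board is.

```python
def step(board):
    """One synchronous step of Conway's Game of Life on a finite grid
    (cells beyond the border are permanently dead)."""
    rows, cols = len(board), len(board[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            live = sum(board[r + dr][c + dc]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr or dc)
                       and 0 <= r + dr < rows and 0 <= c + dc < cols)
            nxt[r][c] = 1 if live == 3 or (board[r][c] and live == 2) else 0
    return nxt

def window(board, r0, c0, size=3):
    """The size x size sub-board with top-left corner (r0, c0)."""
    return [row[c0:c0 + size] for row in board[r0:r0 + size]]

empty = [[0] * 8 for _ in range(8)]
blinker = [[0] * 8 for _ in range(8)]
for r in (2, 3, 4):
    blinker[r][2] = 1   # vertical blinker just outside the 3x3 window

same_now = window(empty, 3, 3) == window(blinker, 3, 3)
same_next = window(step(empty), 3, 3) == window(step(blinker), 3, 3)
print(same_now, same_next)   # True False: same view now, different futures
```

The blinker flips horizontal and pokes one live cell into the window, while the empty board's window stays empty: identical local histories, divergent local futures.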

Lastly, and likely most obviously/controversially, when we go to
summarize our river delta we make a choice of model. Perhaps our river
delta is not obeying Conway's rules for all time. Perhaps it is a very
nice approximation for now, but will be horribly divergent in the long
run. Worst will be if the process giving rise to our river delta belongs
to the class of *non-computables*.

[⚥] van Kampen says: "But suppose the walker has a tendency to persist
in his direction: probability p to step in the same direction, and q to
return. Then X_t is no longer Markovian since the probability of X_t
depends not just on X_{t-1} but also on X_{t-2}. This may be remedied by
introducing the two-component variable {X_t, X_{t-1}}. This joint variable
is again Markovian, with transition probability..."
- "Remarks on Non-Markov Processes"
<http://www.sbfisica.org.br/bjp/files/v28_90.pdf>
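The footnote's persistent walker is easy to simulate (p = 0.8 and the seed are my arbitrary choices). Empirically, the chance of an up-step depends on the *previous* step, so the position X_t alone is not Markov; but conditioning one step further back changes nothing, which is van Kampen's point that the lifted pair {X_t, X_{t-1}} is Markov again.

```python
import random

def persistent_walk(n, p=0.8, seed=0):
    """van Kampen's persistent walker: each step repeats the previous
    step with probability p, reverses with probability q = 1 - p."""
    rng = random.Random(seed)
    steps = [rng.choice((-1, 1))]
    for _ in range(n - 1):
        steps.append(steps[-1] if rng.random() < p else -steps[-1])
    xs = [0]
    for s in steps:
        xs.append(xs[-1] + s)
    return xs

xs = persistent_walk(20000, p=0.8)
inc = [b - a for a, b in zip(xs, xs[1:])]

# P(up | previous step up) vs P(up | previous step down): knowing
# X_{t-1} in addition to X_t changes the forecast, so X_t is not Markov.
up_up = sum(1 for a, b in zip(inc, inc[1:]) if a == 1 and b == 1)
n_up = sum(1 for a in inc[:-1] if a == 1)
up_down = sum(1 for a, b in zip(inc, inc[1:]) if a == -1 and b == 1)
n_down = sum(1 for a in inc[:-1] if a == -1)
print(up_up / n_up, up_down / n_down)   # near p = 0.8 and q = 0.2

# Conditioning on one further step back adds nothing: the pair
# {X_t, X_{t-1}} is Markov, as van Kampen says.
deep = sum(1 for z, a, b in zip(inc, inc[1:], inc[2:])
           if z == 1 and a == 1 and b == 1)
deep_n = sum(1 for z, a, b in zip(inc, inc[1:], inc[2:])
             if z == 1 and a == 1)
print(deep / deep_n)                    # still near 0.8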

[‡] The clearest source I have for this at present is article 5, section 3
of Conceptual Mathematics by Lawvere and Schanuel. If anyone on the
list has further expertise or reference for this concept, your input will
be appreciated.

