<div dir="auto">I haven't tried to prove it but I suspect those two definitions are equivalent.<div dir="auto"><br></div><div dir="auto">In our statistical causal reasoning work we used the causal Markov condition:</div><div dir="auto"><br></div><div dir="auto"><span style="color:rgb(32,33,34);font-family:-apple-system,blinkmacsystemfont,"segoe ui",roboto,lato,helvetica,arial,sans-serif;font-size:16px;background-color:rgb(255,255,255)">every node in a </span><a href="https://en.m.wikipedia.org/wiki/Bayesian_network" style="margin:0px;padding:0px;border:0px;line-height:inherit;font-family:-apple-system,blinkmacsystemfont,"segoe ui",roboto,lato,helvetica,arial,sans-serif;font-size:16px;vertical-align:baseline;background:none rgb(255,255,255);color:rgb(107,75,161);text-decoration-line:none">Bayesian network</a><span style="color:rgb(32,33,34);font-family:-apple-system,blinkmacsystemfont,"segoe ui",roboto,lato,helvetica,arial,sans-serif;font-size:16px;background-color:rgb(255,255,255)"> is </span><a href="https://en.m.wikipedia.org/wiki/Conditionally_independent" style="margin:0px;padding:0px;border:0px;line-height:inherit;font-family:-apple-system,blinkmacsystemfont,"segoe ui",roboto,lato,helvetica,arial,sans-serif;font-size:16px;vertical-align:baseline;background:none rgb(255,255,255);color:rgb(107,75,161);text-decoration-line:none">conditionally independent</a><span style="color:rgb(32,33,34);font-family:-apple-system,blinkmacsystemfont,"segoe ui",roboto,lato,helvetica,arial,sans-serif;font-size:16px;background-color:rgb(255,255,255)"> of its non-descendents, given its parents.</span><br></div><div dir="auto"><br></div><div dir="auto">This was an essential assumption in our causal reasoning algorithms. Famous philosopher Nancy Cartwright denies that it is universally true. But according to my equally esteemed boss, she is wrong.</div><div dir="auto"><br></div><div dir="auto">Frank<br><div data-smartmail="gmail_signature" dir="auto">---<br>Frank C. 
Wimberly<br>140 Calle Ojo Feliz, <br>Santa Fe, NM 87505<br><br>505 670-9918<br>Santa Fe, NM</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 6, 2021, 10:40 AM uǝlƃ ↙↙↙ <<a href="mailto:gepropella@gmail.com">gepropella@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Back when we were talking about the adequacy of deposition systems as analog to the lost opportunity updating mechanism for free will, I cross-checked definitions for the Markov property in 2 of my (old) control textbooks. Here they are:<br>
<br>
from [Estimation Theory and Applications, NE Nahi]<br>
> "A stochastic sequence x(k) is Markov of first order, or simply Markov, if for every i the following relationship holds<br>
> <br>
> p{x(i)|x(i–1),…, x(1)} = p{x(i)|x(i–1)} (1.189)<br>
> <br>
> In other words, the conditional probability density function of x(i) conditioned on all its past (given) values is the same as the density conditioned only on the immediately preceding value x(i–1). Equation (1.189) is to be satisfied for all i." <br>
from [Applied Optimal Control, Bryson & Ho]<br>
> "A random sequence, x(k), k=0,1,…,N is said to be markovian if<br>
> <br>
> p[x(k+1)/x(k),x(k–1),… ,x(0)] = p[x(k+1)/x(k)] (11.1.1)<br>
> <br>
> for all k; that is, the probability density function of x(k+1) depends only on knowledge of x(k) and not on x(k-l), l=1,2,…. The knowledge of x(k) that is required can be either deterministic [exact value of x(k) known] or probabilistic (p[x(k)] known). In words, the markov property implies that a knowledge of the present separates the past and the future." <br>
<br>
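The first-order property both books define can also be checked empirically: for a sequence generated by a first-order chain, estimating p[x(k+1)|x(k)] and p[x(k+1)|x(k),x(k–1)] from the data should give the same answer. A minimal Python sketch (the two-state transition probabilities here are illustrative, not from either textbook):<br>

```python
import random

random.seed(0)

# Illustrative two-state chain; P[current] = (prob of next=0, prob of next=1).
P = {0: (0.7, 0.3), 1: (0.4, 0.6)}

def step(x):
    return 0 if random.random() < P[x][0] else 1

# Simulate x(0), x(1), ..., x(N).
N = 200_000
xs = [0]
for _ in range(N):
    xs.append(step(xs[-1]))

def cond_prob(history):
    """Estimate P(next state = 1 | preceding states == history)."""
    hits = total = 0
    h = len(history)
    for k in range(h, N):
        if tuple(xs[k - h:k]) == history:
            total += 1
            hits += xs[k]
    return hits / total

p_given_0  = cond_prob((0,))     # condition on x(k)=0 only
p_given_00 = cond_prob((0, 0))   # also condition on x(k-1)=0
p_given_10 = cond_prob((1, 0))   # also condition on x(k-1)=1

# For a first-order chain, all three estimates agree (up to sampling noise):
print(p_given_0, p_given_00, p_given_10)
```

Conditioning on the extra past value x(k–1) leaves the estimate unchanged, which is the "knowledge of the present separates the past and the future" reading made concrete.<br>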
I think the difference is interesting, particularly the "in other words" and "in words" parts. The choice of "i-1" vs. "k+1" is also interesting, but much less than "[future] conditioned on its past" versus "separates past and future". If we read like a modernist, through the presentations to some Platonic object behind them, we get the same damned thing, as transformed through fairly standard transforms (from what you read to what you think). But if we read it as a postmodernist, we can ask *why* Nahi chose i and i-1 where B&H chose k+1,k,k-1? And how *might* that choice be related to the more nuanced phrase "separates past from future"? And there are other differences, like Nahi's choice of "Markov of first order, or simply Markov" vs. B&H taking license to avoid allusion to higher order memory.<br>
<br>
A skeptical reader simply has to ask what does this *actually* mean? How are these authors intending to use and reuse this definition later? How will it compose with other concepts? Etc. Maybe it's all accidental and merely a function of the authoring/editing processes in each case. Or maybe not.<br>
<br>
<br>
On 1/5/21 2:28 PM, uǝlƃ ↙↙↙ wrote:<br>
> ... my contrarian nature (and my laziness) forces me to cross-check a proposition from one narrative to another, <br>
<br>
-- <br>
↙↙↙ uǝlƃ<br>
<br>
- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .<br>
FRIAM Applied Complexity Group listserv<br>
Zoom Fridays 9:30a-12p Mtn GMT-6 <a href="http://bit.ly/virtualfriam" rel="noreferrer noreferrer" target="_blank">bit.ly/virtualfriam</a><br>
un/subscribe <a href="http://redfish.com/mailman/listinfo/friam_redfish.com" rel="noreferrer noreferrer" target="_blank">http://redfish.com/mailman/listinfo/friam_redfish.com</a><br>
archives: <a href="http://friam.471366.n2.nabble.com/FRIAM-COMIC" rel="noreferrer noreferrer" target="_blank">http://friam.471366.n2.nabble.com/FRIAM-COMIC</a> <a href="http://friam-comic.blogspot.com/" rel="noreferrer noreferrer" target="_blank">http://friam-comic.blogspot.com/</a> <br>
</blockquote></div>