<div dir="auto">Above I should have written "both/and is better than either/or". For clarity.<div dir="auto"><br></div><div dir="auto">Marc Raibert founded Boston Dynamics which was bought by Google. They're the people that develop the walking animals, etc that appear in so many videos.</div><div dir="auto"><br></div><div dir="auto">Marc and I did an experiment that involved solving differential equations (first principles) offline and storing the results in very large tables. In real time the walking machine fits curves (not first principles) to the tables to determine how to move a joint to achieve balance.<br><br>Is that an example of a synthesis?<br><div data-smartmail="gmail_signature" dir="auto">---<br>Frank C. Wimberly<br>140 Calle Ojo Feliz, <br>Santa Fe, NM 87505<br><br>505 670-9918<br>Santa Fe, NM</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 14, 2020, 9:08 AM Marcus Daniels <<a href="mailto:marcus@snoutfarm.com">marcus@snoutfarm.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-US" link="blue" vlink="purple">
<div class="m_-4905128591047526760WordSection1">
<p>Steve writes:<u></u><u></u></p>
<p>“I *think* this discussion (or this subthread) has devolved to suggesting that predictive power is the only use of modeling (and simulation) whilst explanatory power is not (it is just drama?). “
<u></u><u></u></p>
First-principles explanations start with some assumptions and reason forward. The explanation will be wrong if the assumptions are wrong. If the validation data is inadequate in depth or breadth, or at the wrong scale, the validation that is achieved will be wrong or illusory too. In Nick's example, the problem was that the flight evidence was at the wrong scale. If the flight continued for 120 years, I'd argue that is a distinction without a difference. There won't be a widow, because she'll be dead too.
I suspect a lot of the appeal of explanatory power does not come from the elaboration or analysis that derivations provide, but simply from a desire for control, and a desire to have something to talk about.
Some machine-learning approaches give simple models, models that do not involve thousands of parameters. If one gets to the same equations from an automated process, nothing prevents derivations or deconstruction starting from them. Other machine-learning approaches generalize, but give black boxes that are inscrutably complex. When the latter is far more powerful than the former, what is one to do? Ignore their utility?
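For instance, sparse regression over a library of candidate terms is one way to get a simple model out of an automated process. A sketch with made-up data and a hypothetical term library (not any particular package's API):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 200)
y = 1.5 * x - 0.8 * x**3 + 0.01 * rng.standard_normal(200)  # hidden "law"

# Library of candidate terms; the learner should keep only a few.
names = ["1", "x", "x^2", "x^3", "x^4"]
library = np.column_stack([np.ones_like(x), x, x**2, x**3, x**4])

# Sequentially thresholded least squares: fit, zero small terms, refit.
coef = np.linalg.lstsq(library, y, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    coef[~small] = np.linalg.lstsq(library[:, ~small], y, rcond=None)[0]

print(" + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names) if c))
# Recovers roughly 1.50*x + -0.80*x^3: few parameters, open to derivation.
```

The output is an equation one can deconstruct and reason from, which is exactly what a black box withholds.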
Marcus
.-. .- -. -.. --- -- -..-. -.. --- - ... -..-. .- -. -.. -..-. -.. .- ... .... . ...
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6 http://bit.ly/virtualfriam
unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/