[FRIAM] nice quote

glen gepropella at gmail.com
Wed Oct 9 14:33:38 EDT 2024


Hm. I don't normally read the GPT output posted to the list. But I did this time and am worse off for it. Your original is way better. Anyway, I'd like to continue responding to Stephen's originalism and your tangent into compression at the same time.

The idea that models are abstract is what I think Jon was targeting with the bisimulation reference. There's no requirement that models be less detailed than their referents. In fact, I'd argue that many models have more detail than their referents (ignoring von Neumann's interpretation of Goedel for a minute). A mathematical model of, say, a toy airplane in a wind tunnel (especially such a model implemented in code - with the whole stack from programming language down to transistors) feels more packed with detail than the actual toy airplane in the wind tunnel. There's detail upon detail in that model. Were we to take assembly theory seriously, I think the implemented model is way more detailed (complicated) than the referent. What makes the model useful isn't the compression, but the determinism ... the rational/mental control over the mechanism. We have less control over the toy airplane and the wind than we have over the tech stack in which the model is implemented.

Here's where aphorisms and "models" writ large differ drastically. Aphorisms are purposely designed to have Barnum-Forer (heuristic) power. Models often have that power too (especially in video games, movies, etc.). But Good Faith modelers work against that. A Good Faith modeler will pepper their models' uses with screaming large print IT'S JUST A MODEL. Aphorisms (and all pseudo-profound bullshit) are not used that way. They are used in much the same way *inductive* models are used to trick you into thinking the inference from your big data is categorically credible. I agree with Stephen that Box is mostly referring to inductive inference. But then again, with the demonstrative power of really large inductively tuned models, we're starting to blur the lines between induction, deduction, and abduction. That false trichotomy was profound at some point. But these days, sticking to it like Gospel is problematic.

On 10/9/24 10:49, steve smith wrote:
> Now the original for the 0 or 2 people who might have endured this far:
> 
>     The first clause (protasis?) seems to specifically invoke the "dimension reduction" implications of "compression", but some of the recent discussion here seems to invoke the "discretization" or, more aptly perhaps, the "limited precision" sense?   I think the stuff about bisimulation is based on this difference?
> 
>     The trigger for this flurry of "arguing about words" was Wilson's:
> 
>         "We have Paleolithic emotions, medieval institutions, and god-like technology."
> 
>     to which there were various objections ranging from (paraphrasing):
> 
>         "it is just wrong"
> 
>         "this has been debunked"
> 
>     to the ad-hominem:
> 
>         "Wilson was once good at X but he should not be listened to for Y"
> 
>     The general uproar *against* this specific aphorism seemed to be a proxy for:
> 
>         "it is wrong-headed" and "aphorisms are wrong-headed" ?
> 
>     then Glen's objection (meat on the bones of "aphorisms are wrong-headed"?) that aphorisms are "too short", which is what led me to thinking about aphorisms as models, models as a form or expression of compression, the types of compression (lossy/not), and how that might reflect the "bisimulation" concept https://en.wikipedia.org/wiki/Bisimulation .   At first I had the "gotcha" or "aha" response, on learning more about bisimulation, that it applied exclusively/implicitly to finite-state systems, but in fact it seems that as long as there is an abstraction that obscures or avoids any "precision" issues, it applies to all state-transition systems.
> 
>     This led me to think about the two types of compression that models (or aphorisms?) offer.   One breakdown of the features of compression in modeling is: Abstraction; Dimension Reduction; Loss of Detail; Pattern Recognition.    The first and last (abstraction and pattern recognition) seem to be features/goals of modeling.  The middle two seem to be utilitarian, while the loss of detail is more of a bug, an inconvenience nobody values (beyond the utility of keeping the model small and the way it facilitates "pattern recognition" in a ?perverse? way).
> 
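To make the bisimulation point concrete: two systems with very different amounts of "detail" (different numbers of states) can still be behaviorally indistinguishable. Here's a minimal sketch in Python of naive partition refinement over a toy labeled transition system; the state names, the dict-based representation, and the example systems are all hypothetical, just for illustration:

```python
# Toy labeled transition system (LTS): transitions map
# (state, action) -> set of successor states.

def bisimulation_classes(states, trans, actions):
    """Coarsest bisimulation via naive partition refinement."""
    # Start with one block: all states provisionally equivalent.
    partition = [set(states)]
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Two states stay together only if, for every action,
            # they can reach the same set of current blocks.
            def signature(s):
                return tuple(
                    frozenset(
                        i for i, b in enumerate(partition)
                        if trans.get((s, a), set()) & b
                    )
                    for a in actions
                )
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition

# System 1: p loops on action 'a' (one state).
# System 2: q alternates q -> q2 -> q on 'a' (two states).
# Despite system 2 having more states (more "detail"), all three
# states collapse into one equivalence class: p and q are bisimilar.
states = {"p", "q", "q2"}
trans = {("p", "a"): {"p"}, ("q", "a"): {"q2"}, ("q2", "a"): {"q"}}
classes = bisimulation_classes(states, trans, actions=["a"])
print(classes)  # one class containing p, q, and q2
```

The point of the toy: the "bigger" system isn't distinguishable from the "smaller" one at the behavioral level, which is the sense in which a model's detail count is orthogonal to what it captures.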
-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ