[FRIAM] nice quote
glen
gepropella at gmail.com
Mon Oct 14 12:34:20 EDT 2024
The GPT 4o summarizations left less room for challenging the relationship between aphorisms and models from the very start. It's not the LLM's fault. But the summaries *shut down* the discussion as opposed to opening it up. So neither wrong, nor wrong-headed, whatever those may mean. I'd say "closed" as opposed to "open".
To be clear, I'm confident you (and everyone on this list) know(s) everything I'm about to write. But my duty to modeling and simulation triggers me to avoid preemptive registration, something hucksters rely upon in harvesting from their victims.
Differently detailed - hits the nail on the head. "All models are wrong" doesn't require compression, only the recognition that models are (somehow) different from their referents. This is even true of models where every possible behavior (measurable state) is matched with (1-1 and onto) those of the referent. There is a trivial (or degenerate?) model where neither the model nor the referent have any hidden behavior (latent states) not present in the other. Here, the model would not be "wrong" at all. The map is the territory. I'd claim that even that model would be useful. It can be good to have 2 of a thing instead of just 1 of a thing ... like experiments on genetic twins.
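To make that trivial/degenerate case concrete, here's a minimal sketch (my toy example, not any canonical algorithm): a naive fixed-point bisimulation check over two tiny labeled transition systems. The referent is a two-state toggle and the "model" is the same machine with renamed states; both machines are made up for illustration.

# Naive largest-bisimulation computation for two small labeled
# transition systems (LTSs). Each LTS maps a state to a set of
# (label, successor) pairs.
def bisimulation(lts_a, lts_b):
    rel = {(p, q) for p in lts_a for q in lts_b}   # start with everything
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            # every move of p must be matched by q, and vice versa
            ok = (all(any(lbl_b == lbl_a and (p2, q2) in rel
                          for (lbl_b, q2) in lts_b[q])
                      for (lbl_a, p2) in lts_a[p])
                  and
                  all(any(lbl_a == lbl_b and (p2, q2) in rel
                          for (lbl_a, p2) in lts_a[p])
                      for (lbl_b, q2) in lts_b[q]))
            if not ok:
                rel.discard((p, q))
                changed = True
    return rel

referent = {"up": {("lower", "down")}, "down": {("raise", "up")}}
model    = {"UP": {("lower", "DOWN")}, "DOWN": {("raise", "UP")}}

print(sorted(bisimulation(referent, model)))
# [('down', 'DOWN'), ('up', 'UP')]

The surviving relation pairs each model state with exactly one referent state (1-1 and onto). In that degenerate case the map really is the territory, and the model still isn't useless.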
Excess meaning - I think all our practical/useful models are just as detailed as their referents. That a math model has detail (like banks of repeated transistor crystals) seemingly unrelated to the referent's details isn't excess meaning. It's the internal (unmapped to the referent) behavior the model must have to generate the external (mapped to the referent) behavior. My point was that quantifying the *detail* misses the point. The important part of modeling is the controllability of the model, not how persnickety the model is.
The universality of the hierarchically composed *computer* is what gives us that controllability. Such machines can model anything. But those machines aren't simple or abstract. They're the layered result of decades of engineering (and science), from the math all the way down to power management. E.g., the assembly index of your laptop is quite large (or it should be). It's clearly way on the side of "life-derived". The kicker comes when we consider the assembly index of, say, a model airplane in a wind tunnel. That's also clearly life-derived. The materials and engineering surrounding them matter. If it's made of wood, then its "detail" might be considered fairly low. It's a naturally occurring material. But it is carved or fashioned. Same with the wind tunnel. Air is "natural". But maybe it's not ordinary air? And the engineering that goes into generating and focusing the wind matters. Were we modeling, say, radioactive decay, it would be reasonable to argue that the model is more detailed than the referent, with more moving parts needed to controllably mimic a ubiquitous natural process.
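Just to ground the assembly-index hand-waving, here's a toy sketch (mine, and only a crude upper bound): strings stand in for objects, single characters are the free building blocks, each join of two already-built pieces costs one step, and the only reuse it credits is joining a piece to an identical copy of itself. Real assembly-index calculations are more general and much harder, so treat this as illustration only.

from functools import lru_cache

@lru_cache(maxsize=None)
def assembly_upper_bound(s: str) -> int:
    # Crude upper bound on the assembly index of a string.
    if len(s) <= 1:
        return 0                   # basic building blocks come for free
    best = len(s) - 1              # worst case: append one character at a time
    for i in range(1, len(s)):
        left, right = s[:i], s[i:]
        if left == right:          # build one copy, then join it to itself
            best = min(best, assembly_upper_bound(left) + 1)
        else:
            best = min(best, assembly_upper_bound(left)
                             + assembly_upper_bound(right) + 1)
    return best

print(assembly_upper_bound("abababab"))   # 3 joins: ab, abab, abababab
print(assembly_upper_bound("abcdefgh"))   # 7 joins, no reuse to exploit

The repetitive object is cheap to assemble; the "random" one isn't. That asymmetry is the intuition behind calling a laptop, or a carved wooden model airplane, "life-derived".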
When we talk of models being abstractions, what we really mean is not that they're abstract things. (Computers aren't abstract. They're extant, concrete things.) What we mean by "abstract" here is that these models have logical layers of control such that we *can* ignore the (unmapped) internal state of the model. It's universal in some sense. The particulars of *this* computer don't matter. Any other computer will suffice. (Caution: an unimplemented "model" is not a model.)
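To put some flesh on "ignore the (unmapped) internal state", here's a minimal sketch (hypothetical classes, purely illustrative): two "computers" with very different internals implement the same mapped behavior, and a caller that only touches the interface can't tell them apart. That substitutability is all I mean by "abstract".

class IntCounter:
    def __init__(self):
        self._n = 0                # internal (unmapped) state: a machine integer
    def tick(self):
        self._n += 1
    def read(self) -> int:
        return self._n

class TallyCounter:
    def __init__(self):
        self._marks = []           # internal (unmapped) state: a list of marks
    def tick(self):
        self._marks.append("|")
    def read(self) -> int:
        return len(self._marks)

def observe(counter, ticks: int) -> int:
    # Only the mapped behavior is visible; internals are ignored.
    for _ in range(ticks):
        counter.tick()
    return counter.read()

assert observe(IntCounter(), 5) == observe(TallyCounter(), 5) == 5

The particulars of *this* counter don't matter; any other counter will suffice, provided it's actually implemented.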
I'll go one assertion further. Mental models are also not abstract. The details of one's physiological and neurological machinery exist. Your mental model of, say, a rock, is just as detailed as (if not more detailed than) the rock. The difference is that your brain/body is more controllable/universal than the rock.
On 10/13/24 17:32, steve smith wrote:
>
> On 10/9/24 12:33 PM, glen wrote:
>> Hm. I don't normally read the GPT output posted to the list. But I did this time and am worse off for it. Your original is way better. Anyway, I'd like to continue responding to Stephen's originalism and your tangent into compression at the same time.
>
> Because wrong or wrong-headed, or is that a false dichotomy?
>
>>
>> The idea that models are abstract is what I think Jon was targeting with the bisimulation reference. There's no requirement that models be less detailed than their referents. In fact, I'd argue that many models have more details than their referent (ignoring von Neumann's interpretation of Goedel for a minute). A mathematical model of, say, a toy airplane in a wind tunnel (especially such a model implemented in code - with the whole stack from programming language down to transistors) feels more packed with detail than the actual toy airplane in the wind tunnel. There's detail upon detail in that model. Were we to take assembly theory seriously, I think the implemented model is way more detailed (complicated) than the referent. What makes the model useful isn't the compression, but the determinism ... the rational/mental control over the mechanism. We have less control over the toy airplane and the wind than we have over the tech stack in which the model is implemented.
> I'm not sure what it means in this context for the model to be more detailed than the referent? The toy airplane in a wind tunnel is likely less detailed (or differently detailed?) than a real airplane in real airflow, but if the (computer?) model of such is *more detailed*, then I think this invokes your "excess meaning" or "excess detail?" dismissal? If the computer model/simulation has more detail than the toy-airplane/wind-tunnel model of the fully elaborated "real" airplane in "real airflow", then does it have *more* than the latter? If so, not only excess but also "wrong"? If more detailed than the toy/wind-tunnel, then simply closer to the referent?
>>
>> Here's where aphorisms and "models" writ large differ drastically. Aphorisms are purposely designed to have Barnum-Forer (heuristic) power. Models often have that power too (especially in video games, movies, etc.).
> I appreciate this distinction/acknowledgment.
>> But Good Faith modelers work against that. A Good Faith modeler will pepper their models' uses with screaming large print IT'S JUST A MODEL.
> I agree there is virtue in acknowledging the implications of "IT'S JUST A MODEL!!!!"
>> Aphorisms (and all pseudo-profound bullshit) are not used that way.
> And are all aphorisms by definition "pseudo-profound bullshit"? Or do they still retain some profoundish-utility? Do they in any way represent a (finessed?) useful compression?
>> They are used in much the same way *inductive* models are used to trick you into thinking the inference from your big data is categorically credible.
>
>> I agree with Stephen that Box is mostly referring to inductive inference. But then again, with the demonstrative power of really large inductively tuned models, we're starting to blur the lines between induction, deduction, and abduction. That false trichotomy was profound at some point. But these days, sticking to it like Gospel is problematic.
> I accept as de rigueur that "sticking to anything like Gospel" is problematic. I'm a little slow at the switch here on the earlier part of the paragraph. I will study it. It reads at least mildly profound and I trust not "pseudo-so".
>>
>> On 10/9/24 10:49, steve smith wrote:
>>> Now the original for the 0 or 2 people who might have endured this far:
>>>
>>> The first clause (protasis?) seems to specifically invoke the "dimension reduction" implications of "compression" but some of the recent discussion here seems to invoke the "discretization" or more aptly perhaps the "limited precision"? I think the stuff about bisimulation is based on this difference?
>>>
>>> The trigger for this flurry of "arguing about words" was Wilson's:
>>>
>>> "We have Paleolithic emotions, medieval institutions, and god-like technology."
>>>
>>> to which there were various objections ranging from (paraphrasing):
>>>
>>> "it is just wrong"
>>>
>>> "this has been debunked"
>>>
>>> to the ad-hominem:
>>>
>>> "Wilson was once good at X but he should not be listened to for Y"
>>>
>>> The general uproar *against* this specific aphorism seemed to be a proxy for:
>>>
>>> "it is wrong-headed" and "aphorisms are wrong-headed" ?
>>>
>>> then Glen's objection (meat on the bones of "aphorisms are wrong-headed"?) that aphorisms are "too short", which is what led me to thinking about aphorisms as models, models as a form or expression of compression and the types of compression (lossy/not) and how that might reflect the "bisimulation" concept https://en.wikipedia.org/wiki/Bisimulation . At first I had the "gotcha" or "aha" response to learning more about bisimulation that it applied exclusively/implicitly to finite-state systems, but in fact it seems that as long as there is an abstraction that obscures or avoids any "precision" issues it applies to all state-transition systems.
>>>
>>> This led me to think about the two types of compression that models (or aphorisms?) offer. One breakdown of the features of compression in modeling is: Abstraction; Dimension Reduction; Loss of Detail; Pattern Recognition. The first and last (abstraction and pattern recognition) seem to be features/goals of modeling; the middle two seem to be utilitarian, while the loss of detail is more of a bug, an inconvenience nobody values (beyond the utility of keeping the model small and in the way it facilitates "pattern recognition" in a ?perverse? way).
>>>
--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ