[FRIAM] nice quote
steve smith
sasmyth at swcp.com
Sun Oct 13 20:32:43 EDT 2024
On 10/9/24 12:33 PM, glen wrote:
> Hm. I don't normally read the GPT output posted to the list. But I did
> this time and am worse off for it. Your original is way better.
> Anyway, I'd like to continue responding to Stephen's originalism and
> your tangent into compression at the same time.
Worse off because it was wrong or because it was wrong-headed, or is
that a false dichotomy?
>
> The idea that models are abstract is what I think Jon was targeting
> with the bisimulation reference. There's no requirement that models be
> less detailed than their referents. In fact, I'd argue that many
> models have more details than their referent (ignoring von Neumann's
> interpretation of Goedel for a minute). A mathematical model of, say,
> a toy airplane in a wind tunnel (especially such a model implemented
> in code - with the whole stack from programming language down to
> transistors) feels more packed with detail than the actual toy
> airplane in the wind tunnel. There's detail upon detail in that model.
> Were we to take assembly theory seriously, I think the implemented
> model is way more detailed (complicated) than the referent. What makes
> the model useful isn't the compression, but the determinism ... the
> rational/mental control over the mechanism. We have less control over
> the toy airplane and the wind than we have over the tech stack in
> which the model is implemented.
I'm not sure what it means in this context for the model to be more
detailed than the referent. The toy airplane in a wind tunnel is likely
less detailed (or differently detailed?) than a real airplane in real
airflow, but if the (computer?) model of that is *more detailed*, then I
think this invokes your "excess meaning" or "excess detail" dismissal?
If the computer model/simulation has more detail than the
toy-airplane/wind-tunnel model of the fully elaborated "real" airplane
in "real airflow," then does it have *more* than the latter? If so, not
only excess but also "wrong"? If merely more detailed than the
toy/wind-tunnel, then simply closer to the referent?
>
> Here's where aphorisms and "models" writ large differ drastically.
> Aphorisms are purposely designed to have Barnum-Forer (heuristic)
> power. Models often have that (especially in video games, movies,
> etc.) power.
I appreciate this distinction/acknowledgment.
> But Good Faith modelers work against that. A Good Faith modeler will
> pepper their models' uses with screaming large print IT'S JUST A MODEL.
I agree there is virtue in acknowledging the implications of "IT'S JUST
A MODEL!!!!"
> Aphorisms (and all pseudo-profound bullshit) are not used that way.
And are all aphorisms by definition "pseudo-profound bullshit"? Or do
they still retain some profound-ish utility? Do they in any way
represent a (finessed?) useful compression?
> They are used in much the same way *inductive* models are used to
> trick you into thinking the inference from your big data is
> categorically credible.
> I agree with Stephen that Box is mostly referring to inductive
> inference. But then again, with the demonstrative power of really
> large inductively tuned models, we're starting to blur the lines
> between induction, deduction, and abduction. That false trichotomy was
> profound at some point. But these days, sticking to it like Gospel is
> problematic.
I accept, de rigueur, that "sticking to anything like Gospel" is
problematic. I'm a little slow at the switch here on the earlier part
of the paragraph; I will study it. It reads as at least mildly
profound, and I trust not "pseudo-so".
>
> On 10/9/24 10:49, steve smith wrote:
>> Now the original for the 0 or 2 people who might have endured this far:
>>
>> The first clause (protasis?) seems to specifically invoke the
>> "dimension reduction" implication of "compression," but some of the
>> recent discussion here seems to invoke "discretization," or perhaps
>> more aptly "limited precision"? I think the stuff about bisimulation
>> is based on this difference?
>>
>> The trigger for this flurry of "arguing about words" was Wilson's:
>>
>> "We have Paleolithic emotions, medieval institutions, and
>> god-like technology."
>>
>> to which there were various objections ranging from (paraphrasing):
>>
>> "it is just wrong"
>>
>> "this has been debunked"
>>
>> to the ad-hominem:
>>
>> "Wilson was once good at X but he should not be listened to
>> for Y"
>>
>> The general uproar *against* this specific aphorism seemed to be
>> a proxy for:
>>
>> "it is wrong-headed" and "aphorisms are wrong-headed" ?
>>
>> then Glen's objection (meat on the bones of "aphorisms are
>> wrong-headed"?) that aphorisms are "too short," which is what led me
>> to thinking about aphorisms as models, models as a form or expression
>> of compression, the types of compression (lossy or not), and how
>> that might reflect the "bisimulation" concept
>> https://en.wikipedia.org/wiki/Bisimulation . At first I had the
>> "gotcha" or "aha" response, on learning more about bisimulation,
>> that it applied exclusively/implicitly to finite-state systems, but
>> in fact it seems that as long as there is an abstraction that
>> obscures or avoids any "precision" issues, it applies to all
>> state-transition systems.
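To make that concrete for myself, here's a toy sketch (mine, not
anything from the wiki page or the thread) of checking bisimilarity of
two finite labeled transition systems by greatest-fixpoint refinement;
state names and transition tables are invented for illustration:

```python
# Toy bisimilarity check for finite labeled transition systems.
# trans_a / trans_b map each state to a set of (label, next_state) pairs.

def bisimilar(trans_a, trans_b, start_a, start_b):
    # Start from the full relation and prune pairs that fail to match,
    # until a fixpoint: what survives is the largest bisimulation.
    rel = {(p, q) for p in trans_a for q in trans_b}

    def matches(p, q, rel):
        # Every move of p must be answered by an equally labeled move
        # of q into a still-related state...
        for (lab, p2) in trans_a[p]:
            if not any(l == lab and (p2, q2) in rel
                       for (l, q2) in trans_b[q]):
                return False
        # ...and symmetrically for every move of q.
        for (lab, q2) in trans_b[q]:
            if not any(l == lab and (p2, q2) in rel
                       for (l, p2) in trans_a[p]):
                return False
        return True

    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            if pair in rel and not matches(*pair, rel):
                rel.discard(pair)
                changed = True
    return (start_a, start_b) in rel

# Example: a 2-state a/b loop vs. a 3-state unrolling of the same loop.
A = {'s0': {('a', 's1')}, 's1': {('b', 's0')}}
B = {'t0': {('a', 't1')}, 't1': {('b', 't2')}, 't2': {('a', 't1')}}
print(bisimilar(A, B, 's0', 't0'))  # True: more states, same behavior
```

Note the point relevant to the thread: B has "more detail" (more
states) than A, yet the two are behaviorally indistinguishable, which
is the sense in which the extra detail is neither excess nor "wrong."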
>>
>> This led me to think about the two types of compression that
>> models (or aphorisms?) offer. One breakdown of the features of
>> compression in modeling is: abstraction; dimension reduction; loss
>> of detail; pattern recognition. The first and last (abstraction and
>> pattern recognition) seem to be features/goals of modeling; the
>> middle two seem utilitarian, though the loss of detail is more of a
>> bug, an inconvenience nobody values (beyond the utility of keeping
>> the model small and the way it facilitates "pattern recognition" in
>> a ?perverse? way).
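A trivial sketch of that "loss of detail" point (my own toy example,
with made-up numbers): lossy compression as quantization, where the
coarse pattern survives but distinct originals collapse together and
can no longer be told apart:

```python
# Lossy "compression as abstraction": snap each value to a grid.
# The mapping is non-invertible, so detail is irrecoverably lost.

def quantize(xs, step=0.5):
    return [round(x / step) * step for x in xs]

a = [0.10, 0.96, 2.04, 2.95]
b = [0.12, 1.04, 1.97, 3.04]
print(quantize(a))              # [0.0, 1.0, 2.0, 3.0]
print(quantize(a) == quantize(b))  # True: a and b are now indistinguishable
```

Which is the double edge above: the collapse is what enables "pattern
recognition" (both series read as the same ramp) and is also exactly
the detail nobody values losing.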
>>