<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p><br>
</p>
<div class="moz-cite-prefix">On 10/9/24 9:15 AM, glen wrote:<br>
</div>
<blockquote type="cite"
cite="mid:70ee6031-f85c-4212-baee-9377e670af0f@gmail.com">Uh oh.
Originalism rears its head again! Are our aphorisms alive or dead?
<br>
</blockquote>
<p><br>
</p>
    <p>I have been thinking about "compression" in the sense I think
      you (Glen) often use it.</p>
<blockquote>
<p>"All models are wrong, some are useful" <br>
</p>
</blockquote>
    <p>GPT-4o offers the following "compression" of my long-winded
response here: <br>
</p>
<blockquote>
<p>The (following) discussion argues that <strong>aphorisms</strong>
are like <strong>compressed models</strong> of reality,
offering simplicity through <strong>abstraction</strong> and <strong>pattern
recognition</strong>, but sacrificing <strong>precision</strong>
and <strong>detail</strong> in the process. <strong>Bisimulation</strong>
shows how this simplification applies broadly to systems, not
just finite ones. The trade-off in models (and aphorisms) is
that <strong>loss of detail</strong> is a drawback, but utility
remains through the patterns they reveal.</p>
</blockquote>
    <p>Or, not much shorter but perhaps more succinct:</p>
<blockquote>
<p>The discussion compresses several ideas around <strong>aphorisms</strong>
and <strong>models</strong>, particularly focusing on how they
function as <strong>compressed representations</strong> of
reality.</p>
<ol>
<li>
<p><strong>Aphorisms as Models</strong>: The phrase "All
models are wrong, some are useful" is seen as a compressed
model of a larger truth. The <strong>first part</strong>
("All models are wrong") reflects the idea of <strong>dimension
reduction</strong> or <strong>loss of detail</strong> in
models—when reality is simplified, some accuracy is
sacrificed. The <strong>second part</strong> ("some are
useful") points to the utility of such models despite their
imperfections, a form of <strong>discretization</strong> or
<strong>limited precision</strong> where details are lost,
but patterns remain recognizable.</p>
</li>
<li>
<p><strong>Bisimulation</strong>: This concept, initially
thought to apply only to <strong>finite-state systems</strong>,
actually applies to <strong>all state-transition systems</strong>
as long as they adhere to an abstraction that avoids issues
of precision. This ties into the idea that models, like
aphorisms, rely on abstraction to remain useful despite
their inherent loss of detail.</p>
</li>
<li>
<p><strong>Compression in Modeling</strong>: The discussion
suggests two types of compression in models:</p>
<ul>
<li><strong>Abstraction and Pattern Recognition</strong>:
These are seen as <strong>features</strong> or <strong>goals</strong>
of modeling, focusing on extracting useful insights.</li>
<li><strong>Dimension Reduction and Loss of Detail</strong>:
These are <strong>utilitarian</strong>, needed for
practical reasons, but the loss of detail is viewed more
as an <strong>inconvenience</strong> than a valued trait.</li>
</ul>
</li>
</ol>
<p>The conversation touches on how these compressed forms of
knowledge (models and aphorisms) help with <strong>pattern
recognition</strong> and <strong>simplification</strong>,
even though they inevitably sacrifice some accuracy, which can
be seen as a flaw but is sometimes necessary for clarity and
utility.</p>
</blockquote>
<p>Now the original for the 0 or 2 people who might have endured
this far:<br>
</p>
<blockquote>
      <p>The first clause (protasis?) seems specifically to invoke the
        "dimension reduction" implication of "compression", but some of
        the recent discussion here seems to invoke the "discretization"
        or, perhaps more aptly, the "limited precision" sense? I think
        the stuff about bisimulation hinges on this difference?</p>
<p>The trigger for this flurry of "arguing about words" was
Wilson's:</p>
<blockquote>
<p>"We have Paleolithic emotions, medieval institutions, and
god-like technology."</p>
</blockquote>
      <p>to which there were various objections ranging from
        (paraphrasing):</p>
<blockquote>
<p>"it is just wrong"</p>
</blockquote>
<blockquote>
<p>"this has been debunked"</p>
</blockquote>
<p>to the ad-hominem:</p>
<blockquote>
<p>"Wilson was once good at X but he should not be listened to
for Y"</p>
</blockquote>
<p>The general uproar *against* this specific aphorism seemed to
be a proxy for:</p>
<blockquote>
<p>"it is wrong-headed" and "aphorisms are wrong-headed" ?<br>
</p>
</blockquote>
      <p>Then came Glen's objection (meat on the bones of "aphorisms are
        wrong-headed"?) that aphorisms are "too short", which is what
        led me to thinking about aphorisms as models, models as a form
        or expression of compression, the types of compression
        (lossy or not), and how that might reflect the "bisimulation"
        concept <a moz-do-not-send="true"
          href="https://en.wikipedia.org/wiki/Bisimulation"
          class="moz-txt-link-freetext">https://en.wikipedia.org/wiki/Bisimulation</a>.
        At first I had the "gotcha" or "aha" response, on learning
        more about bisimulation, that it applied exclusively/implicitly
        to finite-state systems; but in fact it seems that, as long as
        there is an abstraction that obscures or avoids any "precision"
        issues, it applies to all state-transition systems. <br>
      </p>
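      <p>To make the bisimulation point concrete, here is a minimal
        Python sketch of the usual relational check, run on two small,
        entirely made-up labeled transition systems. The machines,
        labels, and state names are invented for illustration, and this
        naive greatest-fixpoint loop only terminates for finite
        systems, so it can only gesture at the more general point about
        state-transition systems: <br>
      </p>
      <pre>def bisimilar(states, trans, s0, t0):
    """Naive check that states s0 and t0 are bisimilar.

    trans maps (state, label) pairs to sets of successor states.
    Start from the full relation over the states and repeatedly
    discard pairs that one transition step can tell apart, until
    nothing changes (a greatest-fixpoint computation).
    """
    labels = {lab for (_, lab) in trans}
    rel = {(p, q) for p in states for q in states}

    def matches(p, q):
        # every labeled move of p must be answered by some move of q
        # whose target is still related to p's target
        for lab in labels:
            for p2 in trans.get((p, lab), set()):
                if not any((p2, q2) in rel for q2 in trans.get((q, lab), set())):
                    return False
        return True

    changed = True
    while changed:
        changed = False
        for (p, q) in set(rel):
            if not (matches(p, q) and matches(q, p)):
                rel.discard((p, q))
                changed = True
    return (s0, t0) in rel


# two hypothetical machines: a two-state ticker and a three-state one
states = {"a", "b", "x", "y", "z"}
trans = {
    ("a", "tick"): {"b"}, ("b", "tick"): {"a"},
    ("x", "tick"): {"y"}, ("y", "tick"): {"z"}, ("z", "tick"): {"x"},
}
# True: the extra state is detail the observable behavior never reveals
print(bisimilar(states, trans, "a", "x"))</pre>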
      <p>This led me to think about the two types of compression that
        models (or aphorisms?) offer. One breakdown of the features of
        compression in modeling is: Abstraction; Dimension Reduction;
        Loss of Detail; Pattern Recognition. The first and last
        (abstraction and pattern recognition) seem to be features/goals
        of modeling; the middle two seem to be utilitarian, though the
        loss of detail is more of a bug, an inconvenience nobody values
        (beyond the utility of keeping the model small and the way it
        facilitates "pattern recognition" in a ?perverse? way). A toy
        sketch of that trade-off follows below.<br>
      </p>
</blockquote>
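    <p>And since the "limited precision" reading of compression came up
      above, here is a second toy sketch, in which everything (the
      signal, the sampling rate, the rounding, the crude detector) is
      invented for illustration: a noisy signal is thinned and rounded
      so that detail is thrown away, yet a crude stand-in for "pattern
      recognition" reports roughly the same thing for the compressed
      version as for the original.<br>
    </p>
    <pre>import math, random

random.seed(1)
# a made-up "fine-grained" signal: a slow sine wave plus a little noise
fine = [math.sin(t / 10.0) + random.gauss(0, 0.02) for t in range(400)]

# lossy compression: keep every 10th sample (dimension reduction)
# and round to one decimal place (limited precision)
coarse = [round(x, 1) for x in fine[::10]]

def sign_changes(xs):
    # a crude stand-in for "pattern recognition": count zero crossings
    signs = [math.copysign(1.0, x) for x in xs]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

print(len(fine), "samples compressed to", len(coarse))
print("sign changes, fine:", sign_changes(fine), "coarse:", sign_changes(coarse))</pre>
    <p>The discarded detail is unrecoverable, but the oscillation is
      still legible in the compressed version, which is about all the
      aphorism-as-model is being asked to deliver.</p>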
<p><br>
</p>
</body>
</html>