[FRIAM] new math of complexity

glen gepropella at gmail.com
Fri Jun 14 11:15:19 EDT 2024


Well, I agree that the implications of mapping coarser- with finer-grained ε-machines belong in a church. But the actual mappings (and any attempts to characterize the expressive scope of those machines) are a kind of math inquiry, however obscure. What irritates me is that people conflate the concept of orders with that of levels. We don't need the concept of a "leak" if we stick to the tried-and-true concept of higher-order languages, where the macros are mixed in with the primitives, not somehow independent of them.
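
To make that concrete, here's a toy sketch (Python, every name invented by me, not anyone's actual formalism) of what I mean by macros mixed in with the primitives rather than layered above them:

    # One dispatch table holds both primitives and macros.  The macro is
    # just a composition of primitives; nothing about it lives at a
    # separate "level".
    ops = {}

    def primitive(name):
        def register(fn):
            ops[name] = fn
            return fn
        return register

    @primitive("inc")
    def inc(x): return x + 1

    @primitive("dbl")
    def dbl(x): return x * 2

    # A "macro" registered in the very same table -- higher *order*, but
    # not an independent level.
    ops["inc_then_dbl"] = lambda x: ops["dbl"](ops["inc"](x))

    print(ops["inc_then_dbl"](3))   # 8
    print(ops["inc"](3))            # the primitive stays available: 4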

For example, if we talk about a freezing event, where the "physics" after the event are constrained in such a way that we can ignore large swaths of possible events because they're vanishingly unlikely, then in a higher-order language those lower-order operators are still available, just rarely/never executed. While it's true that we could instantiate a model that re-created the freeze from first principles, it's more efficient to launch the system from (second? derived?) principles instead and watch it play out from the freeze onward.

But the higher-order language is *open* to thawing the macros, a devolution back to a dynamic dominated by the primitives. That won't happen in these (strictly) leveled, independent machines.
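
Again, just a cartoon (Python, move names and numbers made up by me) of the freeze/thaw point: the lower-order operators never leave the language; a freeze merely drives their probability toward zero, and a thaw restores it:

    import random

    PRIMITIVE_MOVES = ["jiggle_left", "jiggle_right", "tunnel"]
    MACRO_MOVES = ["oscillate"]      # derived, post-freeze behavior

    def step(frozen, eps=1e-9):
        # When frozen, the primitive moves are vanishingly unlikely
        # but never removed from the language.
        p_primitive = eps if frozen else 0.9
        if random.random() < p_primitive:
            return random.choice(PRIMITIVE_MOVES)
        return random.choice(MACRO_MOVES)

    # Launch from derived principles: start already frozen and watch it
    # play out, rather than re-creating the freeze from first principles.
    print([step(frozen=True) for _ in range(10)])   # almost surely all "oscillate"

    # The language stays open to a thaw: flip the flag and the dynamic
    # is dominated by the primitives again.
    print([step(frozen=False) for _ in range(10)])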

On 6/14/24 07:25, Marcus Daniels wrote:
> The double-slit experiment demonstrates what appears to be nondeterminism, but that hasn't prevented development of an accurate model of the phenomenon that deterministic computers can simulate.  I don't have to believe a deterministic interpretation of the double-slit experiment, but Occam's Razor encourages me to.  (I can't control the initial conditions of the universe.)  What is the point of discussions about things that cannot be modeled?  These discussions belong in a church.  They are not inquiry.
> 
>> On Jun 14, 2024, at 6:20 AM, glen <gepropella at gmail.com> wrote:
>>
>> But the trouble is that controlled experiments are our gold standard for testing such. Control is the default. It seems like at least confirmation bias. Of course control demonstrates determinism. It's petitio principii. In order to demonstrate a counterexample, we have to control everything we could possibly *ever* control, being left with only what we can't control ... like proving a negative.
>>
>> In that context, those of us who believe there exists some thing we can't control act a bit like theists. Whenever they manage to concretely define the process they claim is uncontrollable, we demonstrate its controllability. Then they move the goalposts and we start all over again. It's tiresome and even if we want to be charitable, allowing that maybe there's something uncontrollable out there (or there is something we might call God), at every turn, as soon as it's defined concretely, it's eventually falsified. That leads some of us to tire out, give up, and just flip the faith and assume there is no uncontrollable thing.
>>
>>> On 6/13/24 19:13, Marcus Daniels wrote:
>>> What's odd is this idea that there is something about nature that can't be described in a repeatable way, such that a digital computer could simulate it, in principle.  Paradoxically, to defend that idea, one would have to describe an experiment that could illustrate counterexamples -- concepts that could not be said.  It is obfuscation by construction.


-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


