[FRIAM] New ways of understanding the world

Marcus Daniels marcus at snoutfarm.com
Tue Dec 1 12:00:13 EST 2020


You seem to be implying that humans are somehow different from machines; that they have something like Chomsky's language acquisition device, which is novel in ways that humans don't understand well enough to implement. My dog learns all sorts of conditional probabilities. For example, she knows she can paw on the garage door in the evening and find me on that conveyor belt machine thing. She knows, or at least reacts to, a correlation between me grabbing my wallet and driving to the dog park. She knows that food is available immediately after that trip. These networks of relations are the sort of structures that were learned in my copy deprotection example, just deeper networks with somewhat more precise perceptual cues. I'm pretty sure my dog has no time or interest in theory. There are balls to chase and delivery people to scare off. I would even say my dog performs experiments when she slams a toy down in front of me to see if it is a good time to play.
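
To make that concrete, here's a toy sketch (Python, with made-up cue and outcome names; not the actual copy deprotection code) of that kind of relation network. It amounts to tallying P(outcome | cue) from a log of events:

from collections import defaultdict

def learn_conditionals(events):
    """events: an iterable of (cue, outcome) pairs observed over time."""
    counts = defaultdict(lambda: defaultdict(int))
    for cue, outcome in events:
        counts[cue][outcome] += 1
    # turn raw counts into conditional probabilities P(outcome | cue)
    return {cue: {o: n / sum(outs.values()) for o, n in outs.items()}
            for cue, outs in counts.items()}

log = [("wallet_grabbed", "dog_park"), ("wallet_grabbed", "dog_park"),
       ("wallet_grabbed", "grocery_store"), ("paw_garage_door", "find_marcus"),
       ("dog_park", "food")]
print(learn_conditionals(log)["wallet_grabbed"])
# roughly {'dog_park': 0.67, 'grocery_store': 0.33}

The dog-sized version just has more cues, noisier outcomes, and a reward signal instead of a print statement.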

-----Original Message-----
From: Friam <friam-bounces at redfish.com> On Behalf Of uǝlƃ ↙↙↙
Sent: Tuesday, December 1, 2020 7:22 AM
To: friam at redfish.com
Subject: Re: [FRIAM] New ways of understanding the world

I'm too ignorant to say anything useful about description vs. theory. All I was talking about is whether one can build a machine that discovers patterns without a theory. And my answer is No. But my answer depends on the "minimal" qualifier. A theory, in this sense, is simply a collection of theorems, provable sentences in a given language. (It seems like a natural extension to include *candidate theorems* -- hypotheticals -- that may or may not be provable, which may match a more vernacular conception of the word "theory".)
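
To pin down what I mean by a minimal theory, here's a toy sketch (Python, Horn-clause style, with placeholder axioms) where the theory is literally the closure of the axioms under one inference rule, and a candidate theorem is just a sentence you haven't (yet) derived:

def theorems(facts, rules):
    """facts: a set of atoms (the axioms); rules: (premises, conclusion) pairs."""
    proved = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in proved and all(p in proved for p in premises):
                proved.add(conclusion)
                changed = True
    return proved  # the "theory": everything provable from the axioms

axioms = {"A"}
rules = [({"A"}, "B"), ({"A", "B"}, "C"), ({"D"}, "E")]
theory = theorems(axioms, rules)
print("C" in theory, "E" in theory)  # True False: "E" stays a candidate, not provable here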

And to go back to Jochen's 2nd post <http://friam.471366.n2.nabble.com/New-ways-of-understanding-the-world-tp7599664p7599668.html>, it seems to me like a machine capable of discovering a theory of everything would *need* a prior language+axioms capable of expressing everything that physics (and biology, etc.) can express. And that implies a higher order language, a language of languages [⛧]. The "try random stuff and see what works" approach *fits* that meta-structure. A machine capable of shotgunning a huge number of subsequent languages *from* a prior language of languages could stumble upon (or search for) a language that works.
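
The degenerate version of that shotgun looks something like the following sketch: draw random expressions from a fixed meta-grammar and keep whatever best fits the observations. (The grammar, the fake data, and the scoring are all placeholders; the real difficulty is choosing the prior language of languages.)

import random

# the prior "language of languages": a fixed grammar of expression trees
OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def random_expr(rng, depth=3):
    """Draw one candidate expression (a tiny 'language') from the meta-grammar."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(["x", rng.uniform(-2.0, 2.0)])
    op = rng.choice(list(OPS))
    return (op, random_expr(rng, depth - 1), random_expr(rng, depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    op, a, b = expr
    return OPS[op](evaluate(a, x), evaluate(b, x))

def error(expr, data):
    """How badly the candidate fails to describe the observations."""
    return sum((evaluate(expr, x) - y) ** 2 for x, y in data)

rng = random.Random(1)
data = [(x, 3 * x * x) for x in range(-5, 6)]   # stand-in "observations"
candidates = [random_expr(rng) for _ in range(5000)]
best = min(candidates, key=lambda e: error(e, data))
print(best, error(best, data))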

Such meta-languages are *schematic*, however. So when people like Tegmark assert that the universe *is* math, there's ambiguity in the word "math" that some people in the audience might miss, much like the ambiguity in the word "logic" that Nick often glosses over. Which math? Which of the many types of math best matches physics? Is it the same type of math that best matches biology? Psychology? Etc.

Of course, if we go back to Soare's definition of "computation" and require it to be _definit_, then it's not clear to me such a schematic AI, pre-programmed with a language of languages, *could* be constructed. But if we relax that requirement, then it seems reasonable.


[⛧] But a language of languages is still a language. Similarly, a theory of theories is still a theory, which is why even such a schematic AI would *still* require a prior theory.

On 12/1/20 6:17 AM, Marcus Daniels wrote:
> It seems to me the taxa of life are a description not a theory.

--
↙↙↙ uǝlƃ

- .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
archives: http://friam.471366.n2.nabble.com/
FRIAM-COMIC http://friam-comic.blogspot.com/ 

