[FRIAM] At the limits of thought

uǝlƃ ☣ gepropella at gmail.com
Wed Apr 22 17:17:44 EDT 2020


I suppose it's delusional synergy that I saw Krakauer's essay the same (sleepless) morning I saw this:

  Experience Grounds Language
  https://arxiv.org/abs/2004.10151

And since all the news, all the time, is about the parasite, I can't help but think Krakauer is wrong in his main thread: that understanding and prediction are distinct. In Bisk et al. above, the current machine learning algorithms are parasitic. Their predictions work like state space reconstruction: they posit some deep structure expressive enough to mimic our linguistic output, but one that's very different from our (internal) state machines. (And to be clear, our internal state machines are just as opaque as the machines'. That we credit our own state machines with "understanding" while writing the machines' off as merely opaque, however predictive, is illusory ... or perhaps anthropocentric.) So I'd claim the machines, like SARS-CoV-2, do *understand*; it's *what* they understand that differs, not *whether* they understand at all.
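
(By "state space reconstruction" I mean something in the spirit of delay embedding à la Takens: rebuild a surrogate state space from the observed output alone, never touching the generator. A minimal sketch in Python, where the delay_embed function, the signal, and the dim/tau parameters are all made up for illustration -- nothing here comes from the Bisk paper:)

  import numpy as np

  def delay_embed(x, dim=3, tau=20):
      """Stack time-delayed copies of a scalar series to
      reconstruct a surrogate state space (Takens-style)."""
      n = len(x) - (dim - 1) * tau
      return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

  # Stand-in "output" of some hidden generator; the embedding
  # only ever sees this series, never the generator itself.
  t = np.linspace(0, 50, 2000)
  x = np.sin(t) + 0.5 * np.sin(2.2 * t)

  states = delay_embed(x)  # shape (1960, 3): the reconstructed trajectory

The reconstructed points can track the generator's trajectory arbitrarily well while sharing none of its internal structure -- which is the sense in which I mean the predictions are parasitic: faithful to the output, indifferent to the innards.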

The machines' algorithms are parasitic because they depend so deeply on the output of our state machines (WS1 and WS2 in the Bisk paper). But as the machines' world scopes grow (from disembodied binaries pushed by hardware clocks to fully parallel, sensorimotor manifolds in real or virtual space and time), their understanding will become less opaque because it will be less parasitic and more autonomous ... in the same way we go "Awwww" when one of Karl Sims' virtual creatures walks across the virtual landscape. They'll still be as opaque as, say, Nick's mind is to mine ... which is pretty damned opaque. But it'll be much easier for us to "see where they're coming from" because they, like us, will have grown up poking around in the world.

On 4/22/20 8:01 AM, uǝlƃ ☣ wrote:
> 
> https://aeon.co/essays/will-brains-or-algorithms-rule-the-kingdom-of-science
> 

-- 
☣ uǝlƃ


