[FRIAM] AI perception

David Eric Smith desmith at santafe.edu
Mon Apr 17 18:12:13 EDT 2023


So we’ve always known (or at least have known in the modern, post-philosopher-driven era) that human perception is an active process, interrogating the world with pre-registering and presumptive frameworks to host “experience”, which are then activated by whatever is “out there” providing stimuli.  (In fairness, Kant was already okay on this point, though giving Kant credit for having seen modern insights is often somewhat like giving newspaper psychics credit for having seen the future: it involves a bit of projection.  A UT prof for whom I used to work claimed that giving Kant credit for conceptualizing galaxies is such a projection; Kant’s philosophical reasoning was some bizarre metaphoric thing that has essentially no overlap with the modern physical conception of galaxies.  So caveat emptor, etc.)

And now this:
https://physics.aps.org/articles/v16/63
(Commentary, IMO very lovely.)

Much of the BH observation wasn’t really about testing general relativity any more, as its parameters are tightly enough constrained that there isn’t much wiggle room for it to differ in the regime where these large, low-curvature BHs exist (and down to considerably smaller ones than that).  What the observations really test is astrophysics, using GR as the registering framework.

In a sense, though, the AIoSphere now has a perception phenomenon that no person within it has.  It can “see” BHs that are not, case by case, producing the data “seen”.  The usual scientist’s impulse is to panic: that we are losing control of the validation of things because we now mix too much pre-registration in with “the data”.  But of course, in a Bayesian-updating view of science, where the integration of everything ever measured is also part of the data, not just the proximal input, it is no more odd that GR, constrained by gravitational-wave ringdowns and other features, should be a prior constraint on the telescope images than that people should see the world with eyes filtered through billions of years of accreted biological evolution.  Particle physics has been finding rare events in accelerator snow for decades using pre-registering models; without them, detection would be strictly impossible.
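
To make the Bayesian-updating point concrete, here is a minimal toy sketch in Python (entirely my own illustration, with made-up numbers and a conjugate Gaussian-Gaussian model; it has nothing to do with the actual EHT/PRIMO machinery).  A tight prior, accumulated from earlier measurements, absorbs a new and noisier observation, which nudges the estimate rather than standing alone:

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian-Gaussian update: posterior mean and variance."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

# Prior: tightly constrained by everything measured before (the "ringdown" role).
prior_mean, prior_var = 6.5, 0.1 ** 2

# New observation: noisy on its own (the "image" role); it need not stand alone.
obs, obs_var = 7.2, 0.5 ** 2

post_mean, post_var = gaussian_update(prior_mean, prior_var, obs, obs_var)
print(f"posterior: {post_mean:.3f} +/- {post_var ** 0.5:.3f}")
# -> posterior: 6.527 +/- 0.098.  The new datum shifts, but does not replace,
#    the integration of everything measured before.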

The upshot, though: as the whole human-social-cultural-scientific community, we are building up increasingly thick and autonomous layers of “seeing”, and AI pre-processing is going to add to that rapidly, and probably chaotically.

What do we “think" about this?  Is there anything that needs to be thought about it?  Or is it an extension of what we already think is basically sound, and just needs to be embedded in new layers of feedback controllers, to function properly?

Eric

