[FRIAM] How soon until AI takes over polling?

glen ep ropella gepr at tempusdictum.com
Wed Nov 11 11:53:18 EST 2020


I think I agree with the idea that *some of* our conceptual dynamics are already coupled with physical dynamics. However, I also think the recent discussion of the Pyrrhonian problematic and vernacular conceptions of "mechanism" highlights where that's *not* the case.

There seems (to me) to be an inexorable trend toward explainable AI. The credibility of any conception (e.g. a simulation) hinges on being able to explain what it's doing. And it's not (quite) enough to bury it all in esoteric math. My own attempts to suss out the distinction are couched in terms of "relational grounding" and a form of "logical depth". The paper we published hasn't had much traction, though; the overwhelming majority of citations are from our own group.

You can view xAI (or xML) as "top down" and solidly mechanistic approaches as "bottom up". But a network is a better way to think about it, where black box predictors (like ODE and stat models) are "thin" and mechanistic models are "thick". Deep learning is just a tad thicker. Mechanistic and physics-based machine learning is thicker still.
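
To make the thin/thick contrast concrete, here's a throwaway sketch of my own (not anything from the paper, and the numbers mean nothing): a black-box fit predicts the same falling-body behavior that a mechanistic integrator generates, but only the latter names its mechanisms and keeps inspectable state.

    # toy contrast, mine only: the "thin" predictor is an opaque least-squares
    # fit; the "thick" model names its mechanisms (gravity, drag) and exposes
    # its internal state
    def fit_thin(times, positions):
        # fit position ~ a * t^2; the single coefficient is all you get
        a = sum(t**2 * x for t, x in zip(times, positions)) / sum(t**4 for t in times)
        return lambda t: a * t**2

    def simulate_thick(t_end, g=9.81, drag=0.05, dt=0.01):
        x, v, t, trace = 0.0, 0.0, 0.0, []
        while t < t_end:
            v += (g - drag * v) * dt    # each term is a claim about mechanism
            x += v * dt
            t += dt
            trace.append((t, x, v))     # the trajectory is there to interrogate
        return trace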

The question is: what is the stuff that makes it thick ... thick with what? My answer is model composition. And it's the composing operators that [dis]allow the "interreality", as well as define the relationship between [white|black|grey] boxes.
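
Here's a minimal sketch of what I mean, with names (Component, compose) that are mine and purely illustrative: when the composing operator is explicit, the wiring between a white-box step and a black-box correction is itself something you can point at and interrogate.

    from typing import Callable

    Component = Callable[[dict], dict]          # state in, state out

    def compose(*components: Component) -> Component:
        # sequential composition; parallel or feedback operators would couple
        # the boxes differently, [dis]allowing different interactions
        def composed(state: dict) -> dict:
            for c in components:                # the wiring is visible, not buried
                state = c(state)
            return state
        return composed

    def mechanistic_step(s):                    # white box: explicit physics
        s["v"] += 9.81 * s["dt"]
        s["x"] += s["v"] * s["dt"]
        return s

    def learned_correction(s):                  # black box: opaque fitted fudge
        s["x"] *= 0.98
        return s

    hybrid = compose(mechanistic_step, learned_correction)   # a grey box
    print(hybrid({"x": 0.0, "v": 0.0, "dt": 0.1}))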

One of the triggering assertions I use at simulation conferences is to claim that validation and verification are the exact same thing, because verification is simply the validation of one's conceptual model against one's computational model. It's somewhat hyperbolic because the practical methods differ. But making the point can open some hard-nosed engineering types to a little philosophical speculation.
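
To render the hyperbole as a toy (my own sketch, made-up numbers, and a single "sameness" check standing in for methods that differ a lot in practice): both activities reduce to comparing two sources of behavior; only the referent changes.

    def compare(a, b, tol=0.05):
        # stand-in for whatever comparison discipline you actually use
        return all(abs(x - y) <= tol for x, y in zip(a, b))

    conceptual    = [0.0, 0.49, 1.96, 4.41]    # what the conceptual model says should happen
    computational = [0.0, 0.50, 1.97, 4.40]    # what the code actually produces
    referent      = [0.0, 0.52, 1.90, 4.50]    # observations of the target system

    verification = compare(conceptual, computational)         # model vs. model
    validation   = compare(computational, referent, tol=0.2)  # model vs. world
    print(verification, validation)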

On 11/11/20 7:08 AM, Steve Smith wrote:
> 
> 
>>
>> So one more thing goes into what is both a black box and a private rather than public box.  It will take over after the first few times it produces much more reliable results, but since we won’t know what it is based on — AIs don’t explain themselves — we will have no ability to extrapolate out of sample.
>>
>> Eric
> 
> And at what point does this kind of coupling yield a full-up "inter-reality" in the Gintautas-Hübler sense?
> 
>     https://journals.aps.org/pre/abstract/10.1103/PhysRevE.75.057201
> 
> My speculation is that we are already (way past) there, which is why the idea of "Russian Interference" in our election via social media feels so trite/mundane even while being hugely threatening.


-- 
glen ep ropella 971-599-3737


