[FRIAM] Interview with Jeremy Howard

uǝlƃ ↙↙↙ gepropella at gmail.com
Wed Feb 24 19:20:46 EST 2021


IDK. I'm a fan of flipping things. When talking about explainability, especially when we have only one sui generis architecture as an existence proof (brain-body => general intelligence), I'd suggest the *explanations* are the fictions and the machine, the network, is what's real. So what we're calling "explainability" is actually "fictionalizability". In that flipped conception, our task is not to fictionalize what the network is doing; it's to de-fictionalize what we think or expect.

To put it another way: the network is doing the real stuff. Our "explanations" of what the network is doing are nonsense.

On 2/24/21 10:39 AM, Jochen Fromm wrote:
> Deep learning mostly seems to be the good old back-propagation in feedforward neural networks, rediscovered every 10 years by a new generation. Plus more data and more servers. The result is reasonable pattern recognition that lacks explainability. As Noah Smith said:
> 
> Deep learning is basically just a computer saying "I can't quite define it, but I know it when I see it."
> https://twitter.com/Noahpinion/status/1361752362969272321
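
For concreteness, here is the "good old back-propagation" Jochen mentions, in miniature: a sketch of a one-hidden-layer feedforward net learning XOR with sigmoid units and plain gradient descent. The layer sizes, the XOR task, and the learning rate are all illustrative choices, not anything from the thread.

import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single linear layer can't learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, sigmoid output.
# (Sizes and seed are arbitrary; other choices work too.)
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule, layer by layer.
    # For squared error, dL/d(out) = (out - y); sigmoid' = s*(1-s).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]]

That's the whole algorithm; "deep" learning stacks more such layers and, as Jochen notes, adds more data and more servers.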

-- 
↙↙↙ uǝlƃ
