[FRIAM] Interview with Jeremy Howard

jon zingale jonzingale at gmail.com
Thu Feb 25 18:49:27 EST 2021


I hope his comments didn't stymie you and your group from continuing to make
progress on whatever idea y'all were expressing on /Usenet/. If only we could
travel back in time and play this later recording
<https://www.youtube.com/watch?v=EI0NXTrS5Pw&ab_channel=InfiniteHistoryProjectMIT&t=440s>
for an earlier Minsky:
/...and I notice that a lot of people keep saying, "well, I thought of that a
long time ago", and that sort of thing, and they keep trying to get
recognition, and why bother?/
To clarify, I feel that Minsky anticipated Waibel (TDNN), Hopfield (RNN),
Hochreiter (LSTM), Jaeger (ESN), et al. in the way that Newton anticipated
Cauchy, Riemann, Weierstrass, et al. That is to say, perhaps, if one squints
hard enough. I do sympathize with Minsky when he laments the absence of
philosophical inquiry during this /wet lab/ era of AI, and I very much enjoy
listening to his interviews on YouTube. It would have been a real pleasure
to have known him.
I appreciate Glen's comment for orienting the discussion around the
phenomena, the networks themselves. It is in this sense that /wet lab/ seems
like an apt analogy. Inevitably, it is to this ground that any meaningful
higher-level theory must relate.

For instance, linguists ask /how is it that children learn from so few
examples?/ Some posit highly specialized, innately given structures
(Chomsky), while others look to highly specialized, external social networks
(Tomasello). Something of the same appears to be happening in machine
learning. We are pleased that so much can be encoded informationally in the
data, and we look for ways that such information can be /encoded as or
afforded by/ the structure of a network.

To my mind, the /unreasonable effectiveness of data/ points to a dual
quality, something like impedance matching. That an abundance of /errorful/
and /incomplete/ data sets beats a few /pristine/ ones speaks to me of /the
unreasonable robustness of information/, i.e., the difficulty of
accidentally distilling the informational content away from the data, or,
thought of another way, a license to be unconcerned that some sample or
other will exhibit randomness (the toy sketch in the postscript below tries
to make this concrete). Thankfully, richly structured sources produce rich
information, and this seems especially so for natural language. Thanks for
keeping this ball rolling :)
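
P.S. A toy sketch of the noisy-versus-pristine point, in Python. Everything
here is my own illustration, not anything from the interview: the sine
target, the sample sizes, the noise level, and the choice of a random forest
are all arbitrary assumptions, and the outcome of course depends on them.

    # Toy sketch: the same off-the-shelf regressor fit once on a small,
    # noise-free sample and once on a much larger, noise-corrupted sample
    # of the same underlying function. All parameters are illustrative.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    def target(x):
        # The "true" structure the data carries information about.
        return np.sin(3 * x) + 0.5 * x

    # Held-out, noise-free test set for scoring both models.
    x_test = np.linspace(0.0, 3.0, 500).reshape(-1, 1)
    y_test = target(x_test.ravel())

    # (a) A few pristine examples: 30 noise-free points.
    x_small = rng.uniform(0.0, 3.0, 30).reshape(-1, 1)
    y_small = target(x_small.ravel())

    # (b) An abundance of errorful examples: 5000 points with label noise.
    x_big = rng.uniform(0.0, 3.0, 5000).reshape(-1, 1)
    y_big = target(x_big.ravel()) + rng.normal(0.0, 0.5, 5000)

    for name, X, y in [("pristine/small", x_small, y_small),
                       ("errorful/large", x_big, y_big)]:
        model = RandomForestRegressor(n_estimators=200,
                                      random_state=0).fit(X, y)
        mse = mean_squared_error(y_test, model.predict(x_test))
        print(f"{name}: held-out MSE = {mse:.4f}")

On a run like this the large errorful sample typically scores better than
the small pristine one, which is all the /unreasonable robustness of
information/ point needs: the structure survives the noise, and abundance
averages the noise away.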


