[FRIAM] neural operators seem promising
Steve Smith
sasmyth at swcp.com
Wed Jul 16 13:19:18 EDT 2025
* Anima's presentation reminded me quite nicely of the Numenta/Redwood
work of Jeff Hawkins et al. (cortical columns, etc.)
* Did Harold Morowitz make a strong assertion to the tune of: "we learned
more about thermodynamics from steam engines than vice versa"?
EricS or StephenG might have first-hand knowledge?
* Is this theory/practice dichotomy just another form of
meta-scaffolding in evolution (of any system), with the cut-and-try
providing the mutation/selection and the theory/formalism binding
the "lessons learned" into, well... "lessons learned"?
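For those who haven't watched the talk yet, the "freedom of scale" Jon mentions has a concrete form: a neural operator acts on a function in Fourier space, so the same learned weights apply at any grid resolution. Here's a minimal NumPy sketch of that spectral-layer idea (the weights are random stand-ins for learned parameters; this is my illustration, not code from the paper):

```python
# Sketch of a 1-D Fourier Neural Operator spectral layer.
# "weights" below are random stand-ins for learned parameters.
import numpy as np

rng = np.random.default_rng(0)

def spectral_conv_1d(u, weights, n_modes):
    """Apply a linear operator in Fourier space.

    u       : (n,) real samples of an input function on a uniform grid
    weights : (n_modes,) complex multipliers, one per retained mode
    n_modes : number of low-frequency modes kept (truncation)
    """
    u_hat = np.fft.rfft(u)                        # forward FFT
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # act on low modes only
    return np.fft.irfft(out_hat, n=len(u))         # back to physical space

# The same 16 weights work on a 64-point or a 256-point grid:
w = rng.standard_normal(16) + 1j * rng.standard_normal(16)
for n in (64, 256):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    out = spectral_conv_1d(np.sin(3 * x), w, n_modes=16)
    print(n, out.shape)
```

Resolution-invariance is exactly what makes this a map between function spaces rather than between fixed-size vectors, which seems to be the mathematical heart of Anandkumar's program.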
On 7/16/2025 2:12 AM, Pieter Steenekamp wrote:
> Both the video of Anima Anandkumar’s Stanford seminar and her
> scientific paper on Neural Operators really got me excited—the ideas
> feel fresh and powerful.
>
> The paper is quite technical and digs into the math behind
> Neural Operators, without talking much about robotics. In her talk,
> though, she clearly links the work to robots, and it sounds as if
> robotics is a big focus for her team.
>
> What jumped out at me is how different her style is from Elon Musk’s
> approach with Tesla’s Optimus robot. Anandkumar begins with deep
> theory, building firm mathematical foundations first. Musk takes a
> “just build it” path—make it, test it, break it, fix it, and keep going.
>
> This contrast reminds me of engineering school and the Faraday‑Maxwell
> story. Faraday was the hands‑on experimenter who uncovered the basics
> of electricity and magnetism through careful tests. Maxwell came later
> and wrote the elegant equations that explained what Faraday had
> already shown.
>
> So I wonder: will the roles flip this time? Will deep theory from
> researchers like Anandkumar guide the breakthroughs first, with
> practice following? Or will practical builders like Musk sprint ahead
> and let theory catch up afterward?
>
> Either way, watching these two paths unfold side by side is thrilling.
> It feels like we’re standing on the edge of something big.
>
> On Wed, 16 Jul 2025 at 04:11, Jon Zingale <jonzingale at gmail.com> wrote:
>
> Even if just for the freedom of scale, learning infinite
> dimensional function spaces, etc...
>
> https://www.youtube.com/watch?v=caZyFlSSKtI
> https://arxiv.org/pdf/2506.10973
>
>
> .- .-.. .-.. / ..-. --- --- - . .-. ... / .- .-. . / .-- .-. ---
> -. --. / ... --- -- . / .- .-. . / ..- ... . ..-. ..- .-..
> FRIAM Applied Complexity Group listserv
> Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
> https://bit.ly/virtualfriam
> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives: 5/2017 thru present
> https://redfish.com/pipermail/friam_redfish.com/
> 1/2003 thru 6/2021 http://friam.383.s1.nabble.com/