<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=utf-8"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
font-size:10.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
span.EmailStyle18
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;
mso-ligatures:none;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style></head><body lang=EN-US link=blue vlink=purple style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal><span style='font-size:11.0pt'>Anyone who has looked at job ads for these companies can see that they are putting extensive effort into reinforcement learning and focused training. Training isn&#8217;t limited to material found on the internet (or even copyrighted physics textbooks). They can teach LLMs how programming/physics/whatever works by giving them example programs and then running the programs. (This isn&#8217;t the same thing as using an LLM to extend data.) The robotaxi companies, for example, do extensive training in simulated physics.<br><br>In terms of copyrighted material, I see the Atlantic is providing its archives: <br><br><a href="https://www.theatlantic.com/press-releases/archive/2024/05/atlantic-product-content-partnership-openai/678529/">https://www.theatlantic.com/press-releases/archive/2024/05/atlantic-product-content-partnership-openai/678529/</a><br><br><o:p></o:p></span></p><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p><div style='border:none;border-top:solid #B5C4DF 1.0pt;padding:3.0pt 0in 0in 0in'><p class=MsoNormal style='margin-bottom:12.0pt'><b><span style='font-size:12.0pt;color:black'>From: </span></b><span style='font-size:12.0pt;color:black'>Friam <friam-bounces@redfish.com> on behalf of Roger Critchlow <rec@elf.org><br><b>Date: </b>Sunday, November 17, 2024 at 8:46 AM<br><b>To: </b>The Friday Morning Applied Complexity Coffee Group <Friam@redfish.com><br><b>Subject: </b>[FRIAM] deducing underlying realities from emergent realities<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'>Sabine is wondering about reported failures of the new generations of LLMs to scale the way their developers expected.<o:p></o:p></span></p><div><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span 
style='font-size:11.0pt'> <a href="https://backreaction.blogspot.com/2024/11/ai-scaling-hits-wall-rumours-say-how.html">https://backreaction.blogspot.com/2024/11/ai-scaling-hits-wall-rumours-say-how.html</a><o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'>On one slide she essentially draws the typical picture of an emergent level of organization arising from an underlying reality and asserts, as every physicist knows, that you cannot deduce the underlying reality from the emergent level. Ergo, if you try to deduce physical reality from language, pictures, and videos, you will inevitably hit a wall, because it cannot be done.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'>So she's actually grinding two axes at once: one is AI enthusiasts who expect LLMs to discover physics, and the other is AI enthusiasts who foresee no end to the improvement of LLMs as they throw more data and compute at them.<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'>But, of course, the usual failure of deduction runs in the opposite direction: you can't predict the emergent level from the rules of the underlying level. Do LLMs believe in particle colliders? Or do they think we hallucinated them?<o:p></o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'><o:p> </o:p></span></p></div><div><p class=MsoNormal><span style='font-size:11.0pt'>-- rec --<o:p></o:p></span></p></div></div></div></body></html>