[FRIAM] deducing underlying realities from emergent realities
steve smith
sasmyth at swcp.com
Mon Nov 18 12:22:43 EST 2024
An interestingly self-similar discussion, given the topic under discussion?
So we have deduced the generative structure of Sabine's argument to
be/"just" a mish mash of semantic concepts arranged to fit her
conservative narrative/? We don't expect ( or need ) to /find a better
structure/ to explain her behaviour?
Is her behaviour in some sense a part of an emergent, qualitatively
distinct paradigm?
And is my offering here just a mish-mash of dense semantic concepts
arranged to be disruptive or self-aggrandizing?
glen wrote:
> Yeah, it's kinda sad. Sabine suggests someone's trying to *deduce* the
> generators from the phenomena? Is that a straw man? And is she making
> some kind of postmodernist argument that hinges on the decoupling of
> scales? E.g. since the generator can't be deduced [cough] from the
> phenomena, nothing means anything anymore?
>
> What they're actually doing is induction, not deduction. And the end
> products of the induction, the generative constraints, depend
> fundamentally on the structure of the machine into which the data is
> fed. That structure is generative, part of the forward map ...
> deductive. But it's parameterized by the data. Even if we've plateaued
> in parameterizing *this* structure, all it implies is that we'll find
> a better structure. As Marcus and Jochen point out, it's really the
> same thing we've been doing for decades, if not centuries, in many
> disciplines.
>
> So her rhetoric here is much like her rhetoric claiming that "Science
> is Failing". It's just a mish-mash of dense semantic concepts arranged
> to fit her conservative narrative.
>
> On 11/17/24 08:45, Roger Critchlow wrote:
>> Sabine is wondering about reported failures of the new generations of
>> LLM's to scale the way their developers expected.
>>
>> https://backreaction.blogspot.com/2024/11/ai-scaling-hits-wall-rumours-say-how.html
>>
>>
>> On one slide she essentially draws the typical picture of an emergent
>> level of organization arising from an underlying reality and asserts,
>> as every physicist knows, that you cannot deduce the underlying
>> reality from the emergent level. Ergo, if you try to deduce physical
>> reality from language, pictures, and videos, you will inevitably hit a
>> wall, because it cannot be done.
>>
>> So she's actually grinding two axes at once: one is AI enthusiasts
>> who expect LLM's to discover physics, and the other is AI enthusiasts
>> who foresee no end to the improvement of LLM's as they throw more
>> data and compute effort at them.
>>
>> But, of course, the usual failure of deduction runs in the opposite
>> direction: you can't predict the emergent level from the rules of the
>> underlying level. Do LLM's believe in particle colliders? Or do
>> they think we hallucinated them?
>>
>
>
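
A toy illustration of glen's point about induction vs. the deductive
forward map (a sketch of my own, not anything from Sabine's post or from
how the labs actually train these things; numpy and a degree-7 polynomial
are just stand-ins for "the machine into which the data is fed"):

import numpy as np

# The "underlying reality" here is a generator we pretend not to know.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
phenomena = np.sin(3.0 * x) + 0.05 * rng.standard_normal(x.size)

# We choose a structure up front (a degree-7 polynomial). The forward map
# that structure defines is deductive: parameters in, predictions out.
def forward(params, x):
    return np.polyval(params, x)

# Induction: the data only fills in the parameters of the chosen structure.
params = np.polyfit(x, phenomena, deg=7)

rms = np.sqrt(np.mean((forward(params, x) - phenomena) ** 2))
print("RMS residual of the induced polynomial:", round(float(rms), 3))
print("Induced coefficients:", np.round(params, 3))
# A good fit, but nothing here lets us deduce that sin(3x) was the generator.

The point being: the fit tells you the parameters of the structure you
already committed to, not the generator behind the data; swap in a Fourier
basis or a small MLP and the same phenomena are fit just as well.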