[FRIAM] Musk’s America Party – Some Thoughts from Afar
steve smith
sasmyth at swcp.com
Thu Jul 10 14:53:03 EDT 2025
On 7/10/25 12:12 PM, glen wrote:
> Ha! I doubt you can stick to that story! 8^D I know you could
> re-generate *a* stream like that again. But how close would that
> stream be to this one? How reliably could you re-generate that stream
> given the same or similar prompt? Say what we will about the LLMs, but
> they are way more reliable than we are, even at high temp.
Yes, I am an unreliable (chaotic) hose snaking around under pressure,
spewing. I am glad you recognized the implicit tongue-in-cheek in the
"sticking to it" (or regenerating it) business. Every *good*
/Just So/ story is bespoke to the moment, and in my case generated JIT
(just in time)... as such stories *were* intended to be used back in
the day, when we recycled our old stories to fit new (often only
nuancedly so) contexts. Just ask Br'er Rabbit?
While LLMs are somewhat "reliable" (repeatable), I am naturally very
inspired by *their* ability to "spew" in all directions at once
(sensitive dependence on initial conditions). The (multi)bifurcation
paths they are capable of following are legion and spectacular (at least
to this meatspace-confined creature that is me)...
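(For the curious, a toy sketch of the "temp" knob you invoke: temperature
just divides the logits before the softmax, so a hot model flattens its
next-token distribution and wanders down more of those bifurcation paths.
This is entirely my own illustration in Python, not anybody's production
sampler.)

import numpy as np

def sample_next_token(logits, temperature, rng):
    """Sample one token index from temperature-scaled logits."""
    scaled = logits / max(temperature, 1e-8)   # T -> 0 collapses toward argmax (reliable)
    scaled -= scaled.max()                     # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.2, -1.0])       # pretend 4-token vocabulary
for T in (0.1, 1.0, 2.0):
    draws = [sample_next_token(logits, T, rng) for _ in range(1000)]
    print(T, np.bincount(draws, minlength=4) / 1000)   # hotter -> flatter -> more "spew"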
I shifted a longstanding discussion from George to Claude recently and
was amazed at how much more in tune Claude was. It was about my mental
model/hypothesis of LLM training sets as a Plenum, with the resulting
attentional spaces (implied and exposed) between us as we discourse
being a family of manifolds, maybe a "sheaf?" or "fiber bundle" for the
mathHoles among us. The *manifold* it co-explored/created with me was
wonderfully complementary to the one(s) George has... I haven't tried
to resolve them against one another directly, but there is a marked
stylistic difference between how the two are willing/able/motivated to
discuss this with me.
> Along similar lines, the "AI WEAPONS" essayist made a comment that he
> was confident that book "The Human-Machine Team" was *not* written by
> AI because the writing was so bad. The LLMs' interpolation functions
> make it difficult to get pathological styles back out.
Yah, could an AI have generated the 90-days/90-deals trade-deal letters
DJT has spewed across our (former) allies and frienemies this season?
Could it generate his (or Elno's) spew of dia-tripe (gratuitous
neologism of the moment) on Truth-Special or eXno? I think it could
generate a *parody* or caricature thereof, but even the signature of
that spew would, I suspect, be recognizable as a forgery?
> They can "read" a tranche of bad l33t coded screeds on 4chan and will
> still re-generate something akin to a (l33t coded) philosophy
> professor because the centroid pulls the interpolation toward a more
> stable region of the space. A recent experiment of mine was to use an
> LLM to analyze the linguistic style of a person's spoken language,
> then use a different LLM to render some information into a new
> document using that style. When the person *read* it, they objected
> that it didn't match their "voice" at all ... a bit like how
> uncomfortable many of us can be when we listen to our own recorded
> voice. Even given that written "voice" is almost always very different
> from spoken "voice", whether the LLMs got it more right than wrong is
> up in the air because the person may have a self-image distant from
> their self. What's that Butthole Surfers line? "You never know just
> how [to|you] look, Through other people's eyes."
<halfhearted Snark> I'm glad to get more hints of how YOU are burning
our grandchildren's carbon/entropy budget through data centers...
probably better than the old-man chatter the rest of us are engaged in
with Dan and George and Yawe and... </halfhearted Snark>
I would be fascinated to hear more about some of your experiments in
these realms; your allusion to "waiting for several LLMs to report
back" (I've probably butchered the quote) intrigued me. I haven't found
much good (accessible-to-me) work on the interpretability of LLM
training and engagement, but I suspect there are thousands of ad-hoc
projects/experiments afoot at any moment? Is that what the AI gurus are
mining now? Our parallelized experiments? Crowdsourcing...
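Were I to try reproducing that two-pass voice experiment of yours
myself, I imagine the plumbing might look roughly like the sketch
below; the model names, prompts, and the George/Claude casting are all
my own guesses, not your actual rig:

# LLM #1 distills a style profile from a transcript of spoken language;
# LLM #2 re-renders some source material in that voice.
from openai import OpenAI        # pip install openai
import anthropic                 # pip install anthropic

oai = OpenAI()                   # "George" reads the transcript
claude = anthropic.Anthropic()   # "Claude" does the re-rendering

def extract_style(transcript):
    resp = oai.chat.completions.create(
        model="gpt-4o",          # assumed model choice
        messages=[{"role": "user",
                   "content": "Describe this speaker's linguistic style "
                              "(diction, cadence, hedges, pet constructions) "
                              "as a terse style guide:\n\n" + transcript}],
    )
    return resp.choices[0].message.content

def render_in_style(style_guide, source_material):
    resp = claude.messages.create(
        model="claude-3-5-sonnet-latest",   # assumed model choice
        max_tokens=1024,
        messages=[{"role": "user",
                   "content": "Rewrite the following strictly in this voice:\n"
                              + style_guide + "\n\n---\n\n" + source_material}],
    )
    return resp.content[0].text

# style = extract_style(open("spoken_transcript.txt").read())
# print(render_in_style(style, open("draft_document.txt").read()))

If the subject still says "that isn't my voice," the interesting
question (per your centroid remark) is whether the style guide was
wrong or whether pass two got pulled back toward the stable middle of
the space.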