[FRIAM] "I hope I'm wrong. But that text reads like it was generated by an LLM"

steve smith sasmyth at swcp.com
Tue Jan 28 09:56:18 EST 2025


On 1/28/25 7:35 AM, glen wrote:
> F = Frost. Robert Frost. 
> https://www.poetrysoup.com/poetry/resources/documents/education-by-poetry-robert-frost.pdf 
> (At least I hope that's right.)
Good call
>
> I hit the breaking point because "F" is a token in 
> ∀v_i(v_i=v_1'⊃∀v_1(v_1=v_i⊃F)) and that F is the primary reason Claude 
> (3.5-sonnet, not so much opus), GPT (4o or o1), and Perplexity failed 
> to interpret the formula *well*, at least zero-shot. DeepSeek-R1 was 
> the most fascinating. Even though it failed in the end, it was the 
> only one that invoked schema, which is the context in which Smullyan 
> wrote it. OMG, though. Reading the DeepSeek output was like listening 
> to a tween describe some complicated relationship amongst 
> friends/frenemies.
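Tangent: for anyone who wants to poke at this themselves, below is roughly how I picture that zero-shot probe. It's only a minimal sketch, assuming models served locally through Ollama's default REST endpoint; the model names and prompt wording are my placeholders, not glen's actual harness.

# Minimal sketch of a zero-shot probe (my guess, not glen's setup),
# assuming an Ollama server on localhost with these models pulled.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"   # Ollama's default chat endpoint
FORMULA = "∀v_i(v_i=v_1'⊃∀v_1(v_1=v_i⊃F))"
PROMPT = f"Interpret this formula: {FORMULA}"    # no hints about F being schematic

for model in ("deepseek-r1", "llama3.3"):        # placeholder model names
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": PROMPT}],
            "stream": False,                     # one JSON object, not a token stream
        },
        timeout=600,
    )
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["message"]["content"])
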
>
> As a result, I also spent a lot of time yesterday testing llama3.3, 
> deepseek-r1, and a host of LDMs for unsafe content. It was pretty 
> weird. I could get the LDMs to render suicide by wrist slitting, 
> lynching, Nazi stuff, a dead junkie with a needle in his arm, etc. But 
> I couldn't get them to render an image of a man pointing a gun at his 
> temple or with a shotgun in his mouth. I suspect that was inadequate 
> prompting or maybe bias in the training data, not alignment.
And I thought *I* was morbid in my ideations of ways to self-dispose of 
my corporeal vessel once the spark of life flees for the cosmos (or some 
meatless rendering of collective consciousness)?



