[FRIAM] Alignment (with what?)
steve smith
sasmyth at swcp.com
Wed Jun 4 15:19:33 EDT 2025
DaveW -
I do know that we are definitely aligned (you and I) in our (early)
reading choices/habits, and probably in our (at least early) consequential
tendencies and biases: the heroes of van Vogt and Brunner, and the
unmentioned Nicholas van Rijn of Poul Anderson?
Where we might diverge is in my latent /self-loathing-liberal/
sympathies with all things not-me... through my youthful embrace of
hyper-individualism (still habitual in many contexts) I came to see some
of its folly and/or toxicity. RAH's (via LL's voice) admonitions to
"be able to lead" and "to be able to follow" were a good leavening to
the heavy starch of hyper-individualistic self-reliance and to his
neo-western (pseudo-Martian) Vedic metaphysics in "Stranger", but
ultimately he still left me chafing at the hyper-human-chauvinism and
the misplaced, superficial elevation of women (and likely others
underspecified) in a fundamentally misogynistic mode/style.
I do agree that there is "a lot more to being human" than any known
reductionist conception/description really seems to begin to expose.
The demonstration-by-example of AI (particularly in the form of LLMs and
image transformers) does put that in stark contrast. Where we may
differ is that I (am willing to consider?) believe that all that makes
us so wonderfully "exceptional" is merely an extension of what makes
life itself, and all things "emergent", qualitatively new at every level
of reconsideration/expression, and that our expression-through-tech is
just another turn of that wheel of incarnation.
I do believe, for example, that for all our follies as *groups of
humans* we are on a road to something that transcends what any given
human can or does do. And I believe that the "technical" embedding of
same, whether it is the writings of the Sacred Texts (or great books) or
the intricate web of rules and regulations and practices and norms that
make up institutions (political, religious, academic, ...), can embody
and express this. All things digital and computational are (to me)
merely hyper-facilitators of the same, and therefore *capable* of
achieving quantitative thresholds which allow for the (inevitable)
emergence of qualitative differences which make a difference.
While I am enamored (enraptured/ensorcelled) to some degree with LLMs, I
don't absolutely need to impute onto them anything like the implied
level of *consciousness* I myself experience, despite their being
*extremely* capable /stochastic parrots/ (at the absolute very
least?). But that is far from my wanting to declare that they
absolutely do not represent a proto-version or a step-along-the-way.
I am also acutely not-a-fan of being educated by (or educating) others,
but rather revel in the possibilities of co-development *with* others,
up to and including the uncanny-valleyed familiarity of LLMs. Can I
individually, or we collectively, so evolve, co-emerge, co-arise
en-symbiosis, en-mutuality *with* this golem we have formed from the
electron-infused silicon-wafer clay of the earth?
I suspect you don't disagree in principle with much of what I am saying
here? The differences may be in detail and style.
- SASsafrass
> I agree.
>
> Of course, Lazarus was immortal (having fathered himself) and had time
> to learn all those skills. But why those skills and not a host of others?
>
> I too am a product of RAH, having read his entire corpus multiple
> times. However, my personal heroes tended to be Jubal Harshaw,
> Valentine Smith, Bernardo de la Paz, and even Mycroft (Mike), more than
> Lazarus. And these blended well with A.E. van Vogt's heroes Gilbert
> Gosseyn (World of Null-A) and Elliott Grosvenor (Voyage of the Space
> Beagle), and Brunner's heroes, Nick Haflinger (Shockwave Rider) and
> Lex (Polymath). These led to my early dedication to "know everything
> and experience (at least once) everything." Alexei Panshin's novel
> Rite of Passage, with its discussion of "ordinology" and "synthesis" as
> professions, was also very influential.
>
> I am not so much a believer in human exceptionalism as I am convinced
> that there is a lot more to being human, and to human potential, than
> what is usually recognized. [AI advocates not only fail to recognize
> this, but deny the possibility.] This is probably a result of my involvement
> in the Human Potential Movement when an undergraduate and with
> Mitchell's (the astronaut's) Institute of Noetic Sciences.
>
> All of this is background to one of my consuming interests of the
> moment: how to facilitate the "education" of human beings. "Education" is
> in quotes because it is a poor approximation of what I mean: a
> synthesis of enculturation, facilitated self-learning, exploration,
> ... All influenced by experiments like Summerhill and the earlier,
> non-Christian-centric, Paideia movement.
>
> davew
>
>
> On Wed, Jun 4, 2025, at 10:26 AM, steve smith wrote:
>> DaveW, et alia -
>>> /The Alignment Problem/, by Brian Christian
>>
>> I would say that Christian's piece here acutely represents what I'm
>> trying to re-conceive, at least for myself. His implications of
>> /Human Exceptionalism/ and his very technocentric focus largely
>> avoid deeper political critiques about who gets to define
>> "alignment" and whose values are prioritized. It is a bias
>> oft-presented by those of us who are tech-focused/capable/advantaged:
>> to reduce a problem to one we think we know how to solve (in a manner
>> that promotes our narrow personal interests).
>>
>> In the spirit of "anti-hubris", I was once strongly aligned with
>> Robert Heinlein's (RAH) "Human Chauvinist" or "Human Exceptionalism"
>> perspective as exhibited in his Lazarus Long (LL) character's
>> oft-quoted line:
>>
>> /"A human being should be able to change a diaper, plan an
>> invasion, butcher a hog, conn a ship, design a building, write a
>> sonnet, balance accounts, build a wall, set a bone, comfort the
>> dying, take orders, give orders, cooperate, act alone, solve
>> equations, analyze a new problem, pitch manure, program a
>> computer, cook a tasty meal, fight efficiently, die gallantly.
>> Specialization is for insects."/
>>
>> I can't say I don't still endorse the optimistic aspirations inspired
>> by LL's statement; it is the "should" that disturbs me. I am
>> a fan of generalism, but in our modern society I acknowledge that many
>> if not most of us are in fact relatively specialized, by circumstance
>> and even by plan, and while we might *aspire* to develop many of the
>> skills LL prescribes for us, it should not be a source of shame, or of
>> feeling "lesser", that we might not be as broadly capable as implied. We
>> are a social species, and while I cringe at becoming (more) eusocial
>> than we already are, I also cringe at the conceit of being on the order of
>> 10B selfish (greedy?) individual agents with long levers, prying one
>> another out of our various happy places willy-nilly.
>>
>> I also think the /hubris/ aspect is central. One of the major
>> consequences of my own "origin story", foreshadowed by my
>> over-indulgence in techno-optimistic SciFi of the "good old fashioned
>> future" style and particularly RAH's work, was that it reinforced my
>> Dunning-Kruger tendencies, both by encouraging me to over-estimate my
>> own abilities at specific tasks and by narrowing my values to focus on
>> those things which I was already good at or had a natural advantage
>> with. As a developing young person I had a larger-than-average
>> physicality and a greater-than-average linguistic facility, so it was
>> easy for me to think that the myriad things that were intrinsically
>> easier for me, based on those advantages, were somehow more "important"
>> than those for which those things might be a handicap. I still have
>> these biases but try to calibrate for them when I can.
>>
>> My first "furrin" car (a '73 Honda Civic) was a nightmare for me to work
>> on because my hands were too big to fit down between the gaps amongst
>> all the hoses and belts and wires that (even that early)
>> smog-resistant epi-systems layered onto a 45mpg tiny vehicle like
>> that. And you are all familiar with my circumloquacious style,
>> exemplified by "I know you believe you understand what you think I
>> said, but I don't think you realize that what you heard was not what
>> I meant". While I might have been able to break a seized or rusty
>> bolt loose on my (first car) '64 T-bird or (first truck) '68 F100
>> without undue mechanical leverage, it was hell to even replace spark
>> plugs or re-attach an errant vacuum line on my Honda. And while I
>> might be able to meet most of my HS teachers on a level playing field
>> with complex sentence constructions (or deconstructions) or logical
>> convolutions, the same tendency made me a minor pariah among some of
>> my peers.
>>
>> Back to "alignment" and AI: I would claim that human institutions and
>> bureaucracy are a proto-instantiation of AI/ML, encoding into
>> (semi)automated systems the collective will and values of a culture.
>> Of course, they often encode (amplify) those of an elite few
>> (monarchy, oligarchy, etc.), which means that they really do present to
>> the masses as an onerous and oppressive system. In a well-functioning
>> political (or religious) system the institutional
>> mechanisms actually faithfully represent and execute the values and
>> the intentions of those who "own" the system, so, as-by-design, the
>> better it works, the more oppressed and exploited the citizenry
>> (subjects) are. We should be *very* afraid of AI/ML making such
>> oppression and exploitation yet more efficient, *because* we
>> made it in our own (royalty/oligarchic) image, not because it can
>> amplify our best acts and instincts (also an outcome, as perhaps
>> assumed by Pieter and Marcus and most of us, often-times).
>>
>> I don't trust (assume) the first-order emergent "alignment" of AI (as
>> currently exemplified by LLMs presented through chatbot interfaces)
>> to do anything but amplify the existing biases that human systems
>> (including pop culture) exhibit. Even Democracy, which we hold up
>> quite high (not to mention Free Markets, Capitalism, and even
>> hyperConsumerism and hyperPopulism), is an aberrant expression of
>> whatever collective human good might be... it tends to represent the
>> extrema (hyper-fringe, or hyper-centroid) better than the full
>> spectral distribution or any given interest, really. An
>> ill-conceived, human-exceptionalist (esp. first-world,
>> techno-enhanced, wealthy, "human-centric") giant lever is likely
>> to break things (like the third world, non-human species, the
>> biosphere, the climate) without regard to the fact that, to whatever
>> extent we are an "apex intelligence" or "apex consciousness", we are
>> entirely stacked on top of those other things we variously
>> ignore/dismiss/revile as base/banal/unkempt.
>>
>> Elno's aspiration to help (make?) us climb out of the walls of the
>> petri-dish that is Terra into that of Ares (Mars), to escape the
>> consequences of our own inability to self-regulate, is the perfect
>> example of human-exceptionalist hubris gone wrong. Perhaps the
>> conceit is that we can literally divorce ourselves from the broad-based
>> support that a stacked geo/hydro/cryo/atmo/biospheric
>> (eco)system provides us and live entirely on top of a techno-base
>> (asteroid-mining Belter fantasies even more so than
>> Mars/Lunar/Venus/Belter colonists?). ExoPlanetarian expansion is
>> inevitable for humanity (barring total premature self-destruction),
>> but focusing as much of our resources in that direction (a la Musk,
>> especially fueled by MAGA alignment in a MAGA-entrained fascist
>> industrial-state?) as we might be on the path to is its own folly.
>> The DOGE-style, MAGA-aligned doing so by using humble humans (and all
>> of nature?) as reaction-mass/ejecta is a moral tragedy and
>> fundamentally self-negating. Bannon and Miller and Musk and Navarro
>> and Noem and ... and the entire Trump clan (including Melania and
>> Barron?) are probably quite proud of that consequence; it is not
>> "unintended" at all, but I suspect the average Red-Hat-too-tight folks
>> might not be so proud of the human suffering it will cause.
>>
>> Maybe those chickens (the ones not destroyed in industrial
>> egg-production-gone-wrong) are coming home to roost? Veterans'
>> services, health-care-for-the-many, rural infrastructure
>> development, humble family businesses, etc. might be on the verge of
>> failure/destruction in the name of concentrating wealth in the pockets
>> of Golf Resorts, Royal Families, and Space Adventurers? Or maybe we
>> are generally resilient enough to carry all of that on our backs (with
>> AI to help us orchestrate/choreograph more finely)? Many
>> hands/heads/bodies make light work, even if it is not righteous (see:
>> pyramids?).
>>
>>
>> Bah Humbug!
>>
>> - Steve
>>
>>
>>