<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>On 12/7/23 7:51 AM, glen wrote:</p>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com">We need
less *trust* and more *trustworthiness*. What I meant by "reality
distortion field" seems different from what you meant. I meant the
effect of being ensconced in privilege, having billions of
dollars, swimming in an ocean of sycophants, etc. Musk's reality
is severely distorted.
<br>
</blockquote>
<blockquote>Touché and bravo. Yes, this is yet more relevant, and
yes to <i>trust vs. trustworthiness</i>. I generally trust
everyone to pursue their own self-interest; what I don't trust
as clearly is my understanding of that self-interest and of their
level of enlightenment as they pursue it, which, convolved with
their alignment to my idea of a "greater good", would seem to be
their "trustworthiness".<br>
</blockquote>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com">
<br>
Of course, I think a part of the TESCREAL club's rhetoric is
similar to revelatory religions like Catholicism or Scientology
...</blockquote>
<blockquote>
<p>TESCREAL, the acronym, is new to me but I appreciate the
cluster/aggregation it offers. I'm a sucker for all things
hopeful/futurist/optimistic by some measure, yet also rather
allergic (the allergy/addiction duality). From the following
paper:</p>
<p><a
href="https://akjournals.com/view/journals/2054/aop/article-10.1556-2054.2023.00292/article-10.1556-2054.2023.00292.xml"
class="moz-txt-link-freetext">https://akjournals.com/view/journals/2054/aop/article-10.1556-2054.2023.00292/article-10.1556-2054.2023.00292.xml</a><br>
</p>
<blockquote>
<p><i>The
backbone of this worldview is the TESCREAL bundle of
ideologies—an acronym coined by the critical AI scholars
Émile Torres and Timnit Gebru to describe an interrelated
cluster of belief systems: transhumanism, Extropianism,
singularitarianism, cosmism, Rationalism, Effective
Altruism, and longtermism.</i></p>
</blockquote>
</blockquote>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com"> or even
occult societies and heavy psychedelics users, where initiates
have a distorted view but the masters who've "studied" for a long
time have a clearer understanding of reality. A wealthy man once
told me, "Money is like air. It's everywhere. The difference
between you and I is that I know how to build engines that harvest
and concentrate it." He clearly felt like he had a better
understanding of reality than me. The rhetoric inverts.
<br>
<br>
I feel the same way about inter-species mind reading. When I see
humans engineer their local ecology (e.g. damming a river or
introducing a biocontrol species), I don't see humans
understanding biology *better* than, say, the rats whose
day-to-day lives might be intensely impacted. I see the rats as having
the clear view and the humans as having the "distorted" view.
Musk, Thiel, and all the rest seem to think they're Hari Seldons.
</blockquote>
<blockquote>In fact, it is beyond inter-species mind-reading; it is
an individual-of-an-arrogant-species reading the mind of Gaia (or
some significant subset). I think I take your point, however. I've
been re-reading Henry Petroski's "To Engineer Is Human" and it is
flooded with examples of the hubris implied here.<br>
</blockquote>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com">But we
rats understand that *luck* is the primordial force and those with
it (the lucky) are so badly skewed they can't see their hand in
front of their face. Perhaps the continually unlucky are also
badly skewed? Only those of us who can narratively map our
wandering from luck to unluck and back are best situated to
understand reality?
<br>
</blockquote>
<blockquote>I've put that under my hat and am letting it try to soak
in. It may take a while. I sense something profound in it but
haven't been able to absorb/parse/internalize it yet.<br>
</blockquote>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com">
<br>
The same difference exists between, say, a front-end developer and
a "close to the metal" embedded systems developer. The former is
closer to ideal computation, computronium, as it were. The latter
is closer to the actual world, where the rubber meets the road.
Which of the two has the more distorted field? Or perhaps only
full stack (writ large) developers experience the least
distortion?<br>
<br>
Arcing back to conceptions of openness: here are some indices that
seem more trustworthy than whatever field is being whipped up by the
byzantine AI Alliance:
<br>
<br>
<a class="moz-txt-link-freetext" href="https://opening-up-chatgpt.github.io/">https://opening-up-chatgpt.github.io/</a>
<br>
</blockquote>
<blockquote>I had no idea how many list-worthy text generators were
out there (I only recognized a few), nor how many categories there
are to be open (or not) within! I'm a sucker for a good taxonomy
or, maybe more to the point, for a partitioning or embedding space
as a way to get some bearings/orientation on the larger landscape.<br>
</blockquote>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com"><a class="moz-txt-link-freetext" href="https://hai.stanford.edu/news/introducing-foundation-model-transparency-index">https://hai.stanford.edu/news/introducing-foundation-model-transparency-index</a>
<br>
</blockquote>
<blockquote>
<p>I was not aware (either) of the term Foundation Model... useful
and interesting (from the website): <br>
</p>
<blockquote>
<h4><i>In recent years, a
new successful paradigm for building AI systems has
emerged: Train one model on a huge amount of data and
adapt it to many applications. We call such a model a
foundation model.</i></h4>
</blockquote>
</blockquote>
<br>
<blockquote type="cite"
cite="mid:8e6266d5-69d0-46e2-977b-f7c91907251a@gmail.com">
<br>
<br>
On 12/6/23 10:47, Steve Smith wrote:
<br>
<blockquote type="cite">As the habitual tangenteer that I am, I'm
left reacting to the phrase "Musk's reality distortion
field". Tangent aside, I do very much appreciate Glen's take
on this and found the multiple references (much more on-topic
than my tangential riff here) interesting and useful. I too
hope Stallman will weigh in, and wonder what the next evolution
of the EFF might look like, or what might replace it, in this new
evolutionary landscape at the intersection of tech and culture.
<br>
<br>
I'm hung up, the last few years, on Yuval Harari's
Intersubjective Reality
<a class="moz-txt-link-rfc2396E" href="https://medium.com/amalgamate/inter-subjective-realities-64b4f6716f72"><https://medium.com/amalgamate/inter-subjective-realities-64b4f6716f72></a>
as derived from the social-science concept of Intersubjectivity
<a class="moz-txt-link-rfc2396E" href="https://en.wikipedia.org/wiki/Intersubjectivity"><https://en.wikipedia.org/wiki/Intersubjectivity></a>.
<br>
<br>
When I first heard Harari's usage/coinage I reacted to it
somewhat the way I did to Kellyanne Conway's Alternative Facts
<a class="moz-txt-link-rfc2396E" href="https://en.wikipedia.org/wiki/Alternative_facts"><https://en.wikipedia.org/wiki/Alternative_facts></a>, but I
now deeply appreciate what they are all alluding to, some more
disingenuously than others.
<br>
<br>
I don't disagree that Musk's every action and statement has the
effect of "distorting reality" but it is our /Intersubjective
Reality/ that is being distorted, not the reality that most of
us were trained/steeped in via the philosophical tradition of
/Logical Positivism
<a class="moz-txt-link-rfc2396E" href="https://plato.stanford.edu/entries/logical-empiricism/"><https://plato.stanford.edu/entries/logical-empiricism/></a>/.
Others here (Social Sciences, Humanism) were probably trained up
and steeped more in Phenomenology
<a class="moz-txt-link-rfc2396E" href="https://plato.stanford.edu/entries/phenomenology/"><https://plato.stanford.edu/entries/phenomenology/></a> and
more comfortable with Intersubjective Reality.
<br>
<br>
I find the likes of Musk or Trump or Altman or ( * ), as the
/Personality/ in "Cult of Personality", much the way the star of
this recently discovered in-sync planetary system
<a class="moz-txt-link-rfc2396E" href="https://mashable.com/article/nasa-exoplanets-orbit-star-sync"><https://mashable.com/article/nasa-exoplanets-orbit-star-sync></a>
exists (see below): the planets' orbits find pairwise
(and more generally n-wise) resonances, all (presumably) coupled
exclusively by gravity (and synced through internal dissipative
tidal forces?).
<br>
<br>
Is chatGPT or OpenAI or AIAlliance ( or, or, or, . . . ) yet
another species of celestial body in an orbital dance?
<br>
<br>
Musk, of course, operates in a higher-dimensional field of
forces, with Tweets (X's?), public appearances, financial
transactions, and launch/contract/release announcements as the
"intermediate vector particles", and the Sturm und Drang of
individual drama/trauma among the companies, organizations, and
individuals who are affected by it all as the internal
dissipative forces.
<br>
<br>
Mashable article on in-sync planetary system
<a class="moz-txt-link-rfc2396E" href="https://mashable.com/article/nasa-exoplanets-orbit-star-sync"><https://mashable.com/article/nasa-exoplanets-orbit-star-sync></a>
<br>
<br>
<a class="moz-txt-link-freetext" href="https://science.nasa.gov/missions/tess/discovery-alert-watch-the-synchronized-dance-of-a-6-planet-system/">https://science.nasa.gov/missions/tess/discovery-alert-watch-the-synchronized-dance-of-a-6-planet-system/</a>
<br>
<br>
On 12/6/23 10:57 AM, Pietro Terna wrote:
<br>
<blockquote type="cite"> Genius!
<br>
<br>
<br>
On 06/12/23 15:11, glen wrote:
<br>
<blockquote type="cite">For those of us who refuse to
contribute to Musk's reality distortion field:
<a class="moz-txt-link-freetext" href="https://thealliance.ai/">https://thealliance.ai/</a>
<br>
<br>
Yeah, it's interesting. Two questions came to mind: 1)
Where is Mozilla? Are they a part of it? And 2) "open" is
not a simple concept. Is it possible that so many
organizations have a clear understanding of what it means?
If so, what do they mean? We've seen, over and over again, a
kind of exploitation of Utopian values, especially in
infrastructure-level software. (I'd love to get Stallman's
opinion.)
<br>
<br>
One way to clarify someone's position on their private
conception of "open" is to ask how they feel about limits to
the exportation of encryption software.
<a class="moz-txt-link-rfc2396E" href="https://www.eff.org/deeplinks/2019/08/us-export-controls-and-published-encryption-source-code-explained"><https://www.eff.org/deeplinks/2019/08/us-export-controls-and-published-encryption-source-code-explained></a><br>
<br>
Another tack is to ask how they feel about fake news, trust
in institutions, free speech, platforming, etc.
<br>
<br>
IDK. The AI Alliance smells, to me, kinda like more TESCREAL
[1], ripe for exploitation and *-washing [2] by the
privileged. If a tech is open and stays open, it'll most
likely do so because individuals commit to it, not because
some meta-corp of mega-corps gets together as "allies". But
I'm a bit cynical.
<br>
<br>
[1] Transhumanism, Extropianism, Singularitarianism, Cosmism,
Rationalism, Effective Altruism, Longtermism
<br>
[2] Green-washing (fossil fuel lobbyists at cop28),
ethics-washing ("ai safety"), dei-washing (sensitivity
training), etc.
<br>
<br>
On 12/5/23 23:47, Pietro Terna wrote:
<br>
<blockquote type="cite">Dear all,
<br>
<br>
what about the post below?
<br>
<br>
Star Wars?
<br>
<br>
</blockquote>
</blockquote>
</blockquote>
</blockquote>
<br>
</blockquote>
</body>
</html>