[FRIAM] AI

Pieter Steenekamp pieters at randcontrols.co.za
Fri Jun 20 23:15:20 EDT 2025


I'm a bit hesitant to go much further than saying it might happen within
five years, because honestly, I have very low confidence in my ability to
make accurate predictions about when — or even if — AI will surpass human
intelligence. But hey, we’re all speculating in good spirits, so here’s my
two cents:

With very low confidence in my own forecast, I predict that within five
years, we’ll have an AI that’s smarter than the smartest human alive today
— though not radically smarter. Think of it like comparing a top university
professor to a hardworking high school student who has a personal tutor and
still struggles to pass. It won’t be like comparing a human to an amoeba —
more like a few rungs up the same ladder, not a leap into another
dimension. And not in all aspects of human intelligence, either: true
creativity will still elude it.

Here’s what motivates my prediction:

1. The current AI architecture is still very basic.
Modern AI is mostly built on simple artificial neurons stacked in
sequential layers — a kind of top-down, hierarchical design. The human
brain, by contrast, is a bottom-up system with a staggering degree of
interconnectedness. In our brains, any neuron could, in theory, connect to
any other neuron. It’s a complex, messy web — and that “messiness” seems to
be a feature, not a bug.

I won’t attempt the full math, but the number of possible wiring patterns in
a fully interconnected system is orders of magnitude beyond what our current
layer-based architectures can express. So to get an AI that’s to humans
what humans are to apes, we’ll need a radically different neural structure,
not just a bigger one.
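
For anyone who wants the back-of-envelope version of that claim, here is a
rough Python sketch. The unit counts are purely hypothetical, and it only
counts which directed connections are allowed and how many wiring diagrams
that permits; real brains and real networks are of course far messier:

import math

def possible_edges_layered(units_per_layer, num_layers):
    """Directed edges allowed when each unit connects only to the next layer."""
    return units_per_layer * units_per_layer * (num_layers - 1)

def possible_edges_full(total_units):
    """Directed edges allowed when any unit may connect to any other unit."""
    return total_units * (total_units - 1)

# Hypothetical sizes, chosen purely for illustration.
UNITS_PER_LAYER, LAYERS = 1_000, 10
TOTAL_UNITS = UNITS_PER_LAYER * LAYERS

e_layered = possible_edges_layered(UNITS_PER_LAYER, LAYERS)
e_full = possible_edges_full(TOTAL_UNITS)

# Every allowed edge can independently be present or absent, so the number
# of distinct wiring diagrams is 2**edges; report it as a power of ten.
print(f"possible edges, layered:      {e_layered:,}")
print(f"possible edges, fully wired:  {e_full:,}")
print(f"wiring diagrams, layered:     ~10^{e_layered * math.log10(2):,.0f}")
print(f"wiring diagrams, fully wired: ~10^{e_full * math.log10(2):,.0f}")

Even at these toy sizes, the fully interconnected wiring space dwarfs the
layered one, which is all the point needs.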

2. Still, with clever hacks, we’ll get surprisingly far.
Even with today’s limited architecture, I think we’ll soon see AI systems
capable of holding down very high-level roles — say, serving as
second-in-command to a human CEO, taking care of day-to-day operations
while the CEO focuses on vision and strategy (with a human board still
making the big calls). You might also have a robotic butler that
understands and fulfills your wishes better than any human could.

What I don’t see coming soon is an AI you can form a deep, personal
relationship with — not in any real, human sense. Maybe we’ll get one that
can run a country better than a certain T-guy... but that’s such a low bar,
it’s not exactly a convincing benchmark.

On Fri, 20 Jun 2025 at 23:30, Marcus Daniels <marcus at snoutfarm.com> wrote:

> Perfect.  I think I might like Ross Douthat even less.  😊
>
>
>
> *From: *Friam <friam-bounces at redfish.com> on behalf of Santafe <
> desmith at santafe.edu>
> *Date: *Friday, June 20, 2025 at 2:23 PM
> *To: *The Friday Morning Applied Complexity Coffee Group <
> friam at redfish.com>
> *Subject: *Re: [FRIAM] AI
>
> This is an interesting direction.
>
>
>
> On Jun 21, 2025, at 5:46, Jochen Fromm <jofr at cas-group.net> wrote:
>
>
>
> I believe it will be possible.
>
>
>
> Will it be a good idea? I don't know. In science fiction movies, AIs often
> start to kill their creators. "Ex Machina", for example, is the story of such
> an AI developed by the CEO of a large corporation:
>
> https://youtu.be/sNExF5WYMaA
>
>
>
> Then there is the possibility of massive unemployment because AI takes
> away the good, creative jobs. Claude's capabilities in programming are
> impressive. Stack Overflow is already in a crisis because developers ask
> ChatGPT, Gemini or Claude instead. More and more employees will lose their
> jobs. It doesn't look good.
>
> https://www.cnn.com/2025/06/17/business/amazon-ai-human-employees-jobs
>
>
>
> Following the article Jochen forwarded, there is another in the same
> channel:
>
> AI warnings are the hip new way for CEOs to keep their workers afraid of
> losing their jobs | CNN Business
> <https://edition.cnn.com/2025/06/18/business/ai-warnings-ceos?iid=cnn_buildContentRecirc_end_recirc>
>
>
> It says what it says.  I won’t tie myself to or away from it.
>
>
>
> I have been thinking for some weeks about the “pro-natalist” crowd, since
> they came up a few months ago.
>
>
>
> As in all this, people can come up with a narrative for pretty-much any
> position, and we are left (if we want to say something meaningful about
> causation) to figure out which, if any, of these narratives has anything to
> do with why something becomes “a movement”, to which many of the
> narrative-spinners are just fabric and hangers-on.  So there can be
> disingenuous (self-disingenuous?) saps and shills like Ross Doubthat of NYT
> who have all sorts of old-fashioned-values arguments about natalism.
>
>
>
> But to me the structure is: they are pushing somebody to have lots of
> babies at exactly the time they are engineering a world to eliminate
> anything like a human life for the babies already had.  I don’t think the
> timing-congruence of those two things is coincidence and unconnected to
> causation.
>
>
>
> It’s clear that falling birthrates seem like a godsend if one thinks
> population must decrease, but doesn’t want that to happen by wars and
> disease epidemics, with lots of acute suffering.  So for whom is it really
> not a godsend?  Well, for people who can’t live without “being supported”.
> There are real suffering-issues for aging populations who currently depend
> on getting crumbs from the big economies for their subsistence.  But we
> probably produce enough, and have enough legacy-stuff, that if we really
> wanted their lives to have manageable suffering, we could achieve that
> through redistribution for however long it will take the various
> generations to die off.  For whom, then, is redistribution off the table
> and they need the “economy” (whatever that is turning into) to be big?  The
> ones who take almost-all of it, for whom there is no redistribution left to
> capture.
>
>
>
> So the pro-natalist movement, in the current context of the feudalization
> of everything, seems to me like it drives paleo-feudalism into something
> that is no longer distinct from arguments for slavery, and maybe even
> stronger than that, to arguments for something more like livestock.
>
>
>
> Dunno.  Probably I just repeat statements of the obvious, or things that
> are already in the air all around us.
>
>
>
> Eric
>
>
>
>
>
>
>
>
>
>