[FRIAM] "I hope I'm wrong. But that text reads like it was generated by an LLM"

Stephen Guerin stephen.guerin at simtable.com
Mon Jan 27 19:14:05 EST 2025


On Mon, Jan 27, 2025 at 1:08 PM Santafe <desmith at santafe.edu> wrote:

> But to suppose they _already_ contain everything there is to be understood
> is not a position I would take w.r.t. anything else we have anywhere in
> science.  They contain or represent whatever they do.  I don’t know how
> much that is, and what more it leaves to be found.  I would be amazed if it
> were “everything”, since nothing else in science ever has been before.
>

I'm trying to follow the thread. Was there a previous post you were
addressing with "But to suppose they _already_ contain everything there
is to be understood"?

Safety guardrails currently prevent linking consumer AI like ChatGPT,
Claude, and Perplexity to physical embodiment with sensors/actuators to
grow intelligence from experimental design and execution. So yes, here
there is a kind of snapshot of "knowledge", albeit one that is
occasionally updated and supplemented with retrieval-augmented
generation (RAG).
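To make the RAG point concrete, here is a minimal sketch of the
retrieve-then-generate loop; embed(), llm_generate(), and the document
store are hypothetical stand-ins, not any particular vendor's API:

    import math

    def cosine(a, b):
        # similarity between two embedding vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query_vec, store, k=3):
        # store: list of (doc_text, doc_vec) pairs indexed offline
        ranked = sorted(store, key=lambda d: cosine(query_vec, d[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

    def answer(question, store, embed, llm_generate):
        # the frozen "snapshot" model is supplemented with fresh,
        # retrieved context at query time
        context = "\n".join(retrieve(embed(question), store))
        prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        return llm_generate(prompt)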

The guardrails are less present in more professional/custom uses of AI,
e.g. autonomous vehicles, where there is more of an ecological
perception/action cycle for developing embodied intelligence and
scientific experimentation. And for our own work, we want AI learning
for realtime observation fusion of camera sensor (perception) networks
with robotic PTZ actuators (action). This also includes citizens
holding their mobile phones and serving as meat-puppet actuators,
supplying lat/long/height and yaw/pitch/roll, as we build decentralized
"situational intelligence" during an incident as well as pre- and
post-incident intelligence to coordinate action.
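As a rough illustration of that perception/action loop (field and
function names here are placeholders, not our actual system):

    import math
    from dataclasses import dataclass

    @dataclass
    class PhoneObservation:
        lat: float        # degrees
        lon: float        # degrees
        height_m: float   # meters
        yaw: float        # compass heading of the phone camera, degrees
        pitch: float      # degrees up/down
        roll: float       # degrees about the view axis
        timestamp: float  # unix seconds

    def mean_bearing(observations):
        # naive fusion: circular mean of the yaw reported by observers
        x = sum(math.cos(math.radians(o.yaw)) for o in observations)
        y = sum(math.sin(math.radians(o.yaw)) for o in observations)
        return math.degrees(math.atan2(y, x)) % 360

    def point_ptz_at(observations, ptz):
        # action side of the loop: steer a robotic PTZ camera toward
        # the consensus bearing; ptz.set_pan() is a hypothetical driver
        ptz.set_pan(mean_bearing(observations))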