[FRIAM] A distinguishing feature of living entities
Steve Smith
sasmyth at swcp.com
Mon May 29 17:13:19 EDT 2023
On 5/29/23 1:32 PM, Marcus Daniels wrote:
> I believe that Zoox, Cruise, Waymo, and Tesla all use these techniques.
I was involved in a project in the 90s where I was introduced to the
degenerate form: trigram analysis. It was fascinatingly effective
*and* fairly easy to understand/explain through a few examples,
especially when compressing out white-space and thereby picking up
word endings juxtaposed with word beginnings ("ng z" or "ng q" or
"ls a" or "a sl"), which provides surprisingly good word-word
context. That in turn correlates "forward" to limited phrase context
(e.g. when "ng" words (verbs?) precede words starting with some
particular unusual consonant).
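For the curious, a minimal sketch of the idea (my own illustration in Python, not code from that project): count character trigrams after compressing out the whitespace, so the trigrams straddle word boundaries.

```python
from collections import Counter

def char_trigrams(text: str) -> Counter:
    """Count character trigrams after compressing out whitespace,
    so trigrams straddle word boundaries and pick up word-ending /
    word-beginning juxtapositions ("ngz", "ngq", "lsa", ...)."""
    compressed = "".join(text.lower().split())
    return Counter(compressed[i:i + 3] for i in range(len(compressed) - 2))

# "running zebras" compresses to "runningzebras", whose trigrams
# include the cross-boundary "ngz".
counts = char_trigrams("running zebras")
```

Comparing these trigram frequency profiles across documents is what made the technique so easy to demonstrate with a few examples.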
>
> Also my dog does not consider the Roomba to be alive. She performed
> some experiments.
My previous dog (a little skittish to start with) never liked/trusted
our roboVac (a Roomba knockoff) and, I would say, treated it as if it
were alive. My previous cat (very old, very experienced, still an
active hunter at 20) never gave it a second look. My new dog (a puppy
who grew up with Babadook, our pet name for the roboVac) is almost
entirely nonplussed by the roboVac *except* when I drive it (by
remote) to keep following/confronting him; then he wants to scare it
or play with it. My new cat (also 1 year old) likewise ignores
Babadook even when *I* drive it... but she *does* simply leave the
room (the entire floor), probably because the high-pitched whine is
irritating?
FWIW my dog *does* take interest in/exception to things happening on
our 40" TV screen, with or without familiar sounds (e.g. barking),
but his interest is fleeting, and he can't be fooled by bringing back
the same visuals, including images/videos of himself. The cat walks
across the shelf in front of the TV noticing nothing more than
perhaps the warmth emanating from the box? Meanwhile I sit and stare
at it for hours, getting deeply involved in the personal lives of the
fictional ensemble characters who show up faithfully week by week (or
hour by hour if I'm binging)...
>
> arxiv.org <https://arxiv.org/pdf/2106.08417v3.pdf>
>
>> On May 29, 2023, at 11:01 AM, Russ Abbott <russ.abbott at gmail.com> wrote:
>>
>>
>> I saw that Rodney Brooks video. He claimed that transformer-based
>> software has no semantic content. I think that's an exaggeration. The
>> semantic content is defined by the token embeddings. Many of the
>> explanations of token embeddings overcomplicate the basic idea. Look
>> up word2vec
>> <https://www.google.com/search?q=word-to-vec+&tbm=isch&ved=2ahUKEwju7IP4i5v_AhUEK0QIHbXrDycQ2-cCegQIABAA&oq=word-to-vec+&gs_lcp=CgNpbWcQAzIECCMQJzIECAAQHjIECAAQHlD0EVj0EWDpGWgAcAB4AIABWogBrwGSAQEymAEAoAEBqgELZ3dzLXdpei1pbWfAAQE&sclient=img&ei=S-Z0ZK6bDITWkPIPtde_uAI&bih=956&biw=1781&cs=0&rlz=1C1RXQR_enUS1008US1008#imgrc=aSLVbcgBsmbHKM> and
>> read some of the articles. Word2vec was around before
>> transformers, but transformers are based on that idea. (One of the
>> keys to transformers is that the embedding space, including the
>> features themselves, is generated as part of the training.) The
>> embedding of all tokens in the GPT embedding space is the semantics.
>> It's amazing how far that idea can be pushed, and the results LLMs
>> produce!
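To make that intuition concrete, here is a toy sketch (my own illustration in Python/NumPy; the 4-dimensional vectors are made-up values, whereas real word2vec embeddings are learned and typically have hundreds of dimensions) of the classic analogy arithmetic in an embedding space:

```python
import numpy as np

# Hypothetical 4-d "embeddings" chosen so that related words point
# in similar directions; real vectors come out of training.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.1, 0.8, 0.2]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: direction agreement, ignoring magnitude."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman should land nearest queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
```

The point is just that semantic relationships become directions in the vector space; transformers start from the same representational idea, then learn the space (and its features) during training.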
>>
>> -- Russ
>>
>>
>> On Mon, May 29, 2023 at 10:13 AM Jochen Fromm <jofr at cas-group.net> wrote:
>>
Yes, Rodney Brooks said something similar: "GPTs have no
understanding of the words they use, no way to connect those
words, those symbols, to the real world. A robot needs to be
connected to the real world and its commands need to be coherent
with the real world. Classically it is known as the 'symbol
grounding problem'."
>> https://rodneybrooks.com/what-will-transformers-transform/
>>
One could argue that this form of connectedness and embeddedness
eventually leads to self-awareness: first physical embeddedness,
then social embeddedness, and finally self-awareness.
>>
>> 1. Physical Embeddedness:
>> Agents who are embedded in a physical world are aware of the
>> world and move in it. To be embedded they need to be embodied.
>> Embeddedness leads to a grounding and a unique point of view
>> https://iep.utm.edu/husspemb/
>>
>> 2. Social Embeddedness:
>> Agents who are embedded in a world of social actors are aware of
>> other agents in the world and interact with them
>> https://mitpress.mit.edu/9780262518581/the-new-science-of-the-mind/
>>
>> 3. Self-Awareness:
Agents who are embedded in two worlds, the physical world and the
world of language, become aware that they are actors in a world of
social actors by creating a link between their embodiment in the
physical world (the body) and their embodiment in the world of
language (the name, or "I")
>>
>> -J.
>>
>>
>> -------- Original message --------
>> From: Russ Abbott <russ.abbott at gmail.com>
>> Date: 5/29/23 7:08 AM (GMT+01:00)
>> To: The Friday Morning Applied Complexity Coffee Group
>> <friam at redfish.com>
>> Subject: [FRIAM] A distinguishing feature of living entities
>>
>> While watching my two little dogs run around our house, it struck
>> me that a feature that distinguishes living from non-living
>> entities is the apparent effortlessness with which living ones
>> navigate the world. Imagine how difficult it would be to build a
>> robot that could navigate the world so effortlessly. To make the
>> comparison a bit simpler, imagine how difficult it would be to
>> build a robotic cockroach.
>>
>> When I asked ChatGPT whether anyone has built a robotic
>> cockroach, it came up with these examples. (I haven't checked to
>> see whether these are real projects.)
>>
>> *
>>
>> DASH: The Dynamic Autonomous Sprawled Hexapod (DASH)
>> robot, developed at the University of California,
>> Berkeley, was inspired by the rapid locomotion of
>> cockroaches. It has six legs and can move quickly on
>> various terrains using a simple control mechanism.
>>
>> *
>>
>> Harvard RoboBee: Although not specifically modeled after
>> a cockroach, the Harvard RoboBee project aims to develop
>> small, insect-like robots. These tiny flying robots are
>> inspired by the mechanics and flight capabilities of
>> insects and demonstrate similar agility and maneuverability.
>>
>> *
>>
>> iSprawl: The iSprawl robot, developed at the University
>> of California, Berkeley, was inspired by cockroaches'
>> ability to squeeze through small spaces. It uses a
>> compliant body design and six legs to navigate tight and
>> cluttered environments.
>>
>> *
>>
>> VelociRoACH: Developed at the University of California,
>> Berkeley, the VelociRoACH is a fast-running robot
>> designed to mimic the high-speed locomotion of
>> cockroaches. It utilizes a legged design and has
>> demonstrated impressive speed and agility.
>>
>> These mainly explore locomotion. Besides locomotion,
>> cockroaches notice when someone enters an area where they are
>> exposed and quickly scuttle off to some hiding place. How do
>> they sense the presence of a new being? How do they know where
>> the hiding places are? How do they know how to move in the right
>> direction? How do they know how to avoid small obstacles and
>> fires? Etc.
>>
One can argue that these capabilities are hard-wired in, but that
doesn't make it any easier. These are still capabilities they
have that would be a challenge to build.

I was amazed at how well-connected living entities are to
their environments. They quickly and easily extract and use
information from their environment that is important to their
survival.
>>
>> Man-made robots have nowhere near that level of embeddedness and
>> environmental integration.
>>
>> Was it Rodney Brooks who said that we should build that sort of
>> connectedness before worrying about building intelligence into
>> our robots? Today that struck me as an important insight.
>>
>> -- Russ Abbott
>> Professor Emeritus, Computer Science
>> California State University, Los Angeles
>>
>> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
>> FRIAM Applied Complexity Group listserv
>> Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
>> https://bit.ly/virtualfriam
>> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>> FRIAM-COMIC http://friam-comic.blogspot.com/
>> archives: 5/2017 thru present
>> https://redfish.com/pipermail/friam_redfish.com/
>> 1/2003 thru 6/2021 http://friam.383.s1.nabble.com/