[FRIAM] AI possibilities

Stephen Guerin stephen.guerin at simtable.com
Fri Apr 7 12:54:39 EDT 2023


Good, now make a short poem in the style of Emerson on those themes of love,
passion and urgency, using the same objects.

ChatGPT4:

In the garden of life's grand design,
Where fire engines race and intertwine,
With urgency, they rush to quell the flame,
Passion's intensity, their purpose to tame.

Among the emerald leaves, a symbol of love,
A crimson apple, a gift from above,
In life's sweet orchard, temptation awaits,
The taste of passion, our destiny dictates.

A rose, a beacon of fervor and desire,
Its scarlet petals, like a burning fire,
Delicate, yet fierce, it captures our soul,
Whispering the secrets of love's tender role.

In this tapestry of fate, these colors align,
The fire engine, the apple, and the rose entwine,
Embodying love, passion, and urgency's call,
The ebb and flow of life, connecting us all.
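
For anyone who wants to reproduce this kind of two-turn exchange
programmatically rather than in the web UI, here is a minimal sketch using
the openai Python package (the 0.27-era ChatCompletion interface; the model
name and prompts are just this thread's example, and the key is a
placeholder):

    import openai  # pip install openai

    openai.api_key = "sk-..."  # placeholder; use your own key

    # First turn: the commonality question from the quoted message below.
    messages = [{"role": "user", "content":
                 "What might a fire engine, an apple and a rose have in common?"}]
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    answer = resp["choices"][0]["message"]["content"]
    print(answer)

    # Second turn: feed the model's reply back in, so the poem request can
    # refer to "those themes" and "the same objects" and be understood.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Good, now make a short poem in the style "
         "of Emerson on those themes of love, passion and urgency, "
         "using the same objects."},
    ]
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(resp["choices"][0]["message"]["content"])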




On Fri, Apr 7, 2023, 10:48 AM Stephen Guerin <stephen.guerin at simtable.com>
wrote:

> As I wrote the prompt, I was wondering how it might deal with the
> ambiguity of color for these objects.
>
> Prompt: What might a fire engine, an apple and a rose have in common?
>
> ChatGPT4:
> A fire engine, an apple, and a rose might not appear to have much in
> common at first glance, as they belong to different categories—vehicles,
> fruits, and flowers, respectively. However, one possible commonality among
> them is their color: red. Fire engines are typically red, apples can be red
> (among other colors), and roses can also be red. The color red is often
> associated with strong emotions such as love, passion, and urgency, which
> may be another connection among these seemingly unrelated objects.
>
> On Fri, Apr 7, 2023, 10:15 AM Steve Smith <sasmyth at swcp.com> wrote:
>
>> I tripped over (in my Gnewsfeed) an article that seemed to speak more
>> clearly to some of my maunderings:
>>
>> 8 Potentially Surprising Things To Know About Large Language Models (LLMs)
>> <https://www.marktechpost.com/2023/04/06/8-potentially-surprising-things-to-know-about-large-language-models-llms/>
>>
>> And the paper it summarizes (with a similar title, more detail and
>> references):
>>
>>  Eight Things to Know about Large Language Models - Samuel R. Bowman
>> <https://arxiv.org/pdf/2304.00612.pdf>
>>
>> And in particular, this point:
>>
>>
>>    1. *LLMs frequently acquire and employ external-world
>>    representations.*
>>
>> More and more evidence suggests that LLMs build internal representations
>> of the world, allowing them to reason at an abstract level insensitive to
>> the specific language form of the text. The evidence for this phenomenon is
>> strongest in the largest and most recent models, so we should expect it to
>> grow more robust as systems are scaled up further. Nevertheless, current
>> LLMs do this inconsistently and imperfectly.
>>
>> The following findings, based on a wide variety of experimental
>> techniques and theoretical models, support this assertion.
>>
>>    - Models' internal representations of color are highly consistent
>>    with empirical findings on how humans perceive color.
>>    - Models can infer the author's knowledge and beliefs and use them to
>>    predict how a document will continue.
>>    - Given stories, models update their internal representations of the
>>    features and locations of the objects those stories describe.
>>    - Models can sometimes give workable instructions for drawing unusual
>>    objects.
>>    - Models pass many commonsense reasoning tests, even ones like the
>>    Winograd Schema Challenge that are designed to contain no textual hints
>>    to the answer (a concrete example follows below).
>>
>> These findings counter the conventional wisdom that LLMs are merely
>> statistical next-word predictors and can’t generalize their learning or
>> reasoning beyond text.
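>>
>> As a concrete illustration of the Winograd point above, here is a classic
>> published schema pair (the example is standard in the literature; the tiny
>> Python wrapper is just a sketch for convenience, not from the article):
>>
>>     # A Winograd schema: flipping one adjective flips the correct
>>     # referent of "it", so surface word statistics alone can't resolve it.
>>     sentence = "The trophy doesn't fit in the brown suitcase because it is too {}."
>>     referents = {"big": "the trophy", "small": "the suitcase"}
>>
>>     for adjective, referent in referents.items():
>>         print(sentence.format(adjective), "-> it =", referent)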
>>
>> On 4/6/23 8:27 AM, Steve Smith wrote:
>>
>> I have been reading Jeff Hawkins' _A Thousand Brains_, which is roughly *his*
>> take on AI from the perspective of the neuroscience *he* has been doing for
>> a few decades, including building models of the neocortex.
>>
>> What struck me strongly was how much *I* expect anything I'd want to call
>> artificial *consciousness* to engage in "co-mmunication" in the strongest
>> sense.  Glen regularly admonishes us that "communication" may be an
>> illusion and something we don't actually *do*, or maybe more to the
>> point, "it doesn't mean what we think it means"?
>>
>> So for all the parlor tricks I've enjoyed playing with chatGPT and DALL-E
>> and maybe even more spectacularly the myriad examples *others* have teased
>> out of those systems, I am always looking for what sort of "internal state"
>> these systems are exposing to me in their "utterances".   And by extension,
>> I am looking to see if it is in any way apprehending *me* through my
>> questions and prompts.
>>
>> Dialog with chatGPT feels pretty familiar to me, as if I'm conversing
>> with an unusually polite and cooperative polymath.  It is freeing to feel
>> I can ask "it" any question I can formulate and expect back a
>> pretty *straight* answer, if not always the one I was hoping for.  "It" seems
>> pretty insightful and usually picks up on the nuances of my questions.  As
>> often as not, I need to follow up with refined questions which channel the
>> answers away from the "mundane or obvious", but when I do, it rarely misses
>> a trick, is evasive, or harps on something from its own (apparent)
>> agenda.  It only does that when I ask it questions about its own nature,
>> formulation, or domain, and then it just seems blunted, as if it has a lawyer or
>> politician intercepting some of those questions and answering them for it.
>>
>> I have learned to "frame" my questions by first asking it to defer its
>> response until I've given it some ... "framing" for the actual question.
>> Otherwise I go through a series of steps where I have to re-ask the
>> same question with more and more context, or ask a very long and convoluted
>> question.  At first it was a pleasure to be able to unlimber my
>> convoluted-question-generator and have it (not mis)understand me and even
>> not seem to "miss a trick".  As I learned to generate several framing
>> statements before asking my question, I found that I *can* give it too
>> many constraints (apparently), such that it respects some or most of my framing
>> but then avoids or ignores other parts.  At that point I have to ask
>> follow-up, elaborating, contextualizing questions.
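>>
>> For what it's worth, that "defer your response until I'm done framing"
>> pattern maps naturally onto the chat API's message list. A minimal sketch
>> (the framing text here is invented, purely to show the shape; the list
>> would be passed to the same ChatCompletion.create call sketched near the
>> top of this thread):
>>
>>     # Each framing statement gets its own user turn; the model is asked
>>     # to hold its answer until the actual question arrives.
>>     messages = [
>>         {"role": "user", "content": "Don't answer yet; framing first."},
>>         {"role": "user", "content": "Framing 1: assume an audience of complexity scientists."},
>>         {"role": "user", "content": "Framing 2: skip the mundane or obvious restatements."},
>>         {"role": "user", "content": "Done framing. Now the question: <your actual question>"},
>>     ]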
>>
>> I do not yet feel like I am actually seeing into chatGPT's soul or in any
>> way being seen by it.   That will be for a future generation I suspect.
>> Otherwise it is one hella "research assistant" and "spitball partner" on
>> most any topic I've considered that isn't too contemporary (training set
>> ended 2021?).
>>
>> - Steve
>>
>> On 4/4/23 5:54 PM, Prof David West wrote:
>>
>> Based on the flood of stories about ChatAI, it appears:
>>    - they can 'do' math and 'reason' scientifically
>>    - they can generate essays, term papers, etc.
>>    - they can engage in convincing dialog/conversations
>>      - as "therapists"
>>      - as "girlfriends" (I haven't seen any stories about women falling
>> in love with their AI)
>>      - as kinksters
>>    - they can write code
>>
>> The code-writing ability immediately made me wonder: given a database
>> of music instead of text, could they write music?
>>
>> The dialog /conversation ability makes me wonder about more real-time
>> collaborative interaction, improv acting / comedy? Or, pair programming?
>> The real-time aspect is critical to my question, as I believe there is
>> something qualitatively different between two people doing improv or pair
>> programming than simply engaging in dialog. I think I could make a much
>> stronger argument in the case of improv music, especially jazz, but AIs
>> aren't doing that yet.
>>
>> davew
>>
>> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
>> FRIAM Applied Complexity Group listserv
>> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom
>> https://bit.ly/virtualfriam
>> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>> FRIAM-COMIC http://friam-comic.blogspot.com/
>> archives:  5/2017 thru present
>> https://redfish.com/pipermail/friam_redfish.com/
>>    1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>>