[FRIAM] Magic Harry Potter mirrors or more?

Gillian Densmore gil.densmore at gmail.com
Tue Feb 28 21:10:48 EST 2023


This John Oliver piece might either amuse or mortify you.
https://www.youtube.com/watch?v=Sqa8Zo2XWc4&ab_channel=LastWeekTonight

On Tue, Feb 28, 2023 at 4:00 PM Gillian Densmore <gil.densmore at gmail.com>
wrote:

>
>
> On Tue, Feb 28, 2023 at 2:06 PM Jochen Fromm <jofr at cas-group.net> wrote:
>
>> The "Transformer" movies are like the "Resident evil" movies based on a
>> similar idea: we take a simple, almost primitive story such as "cars that
>> can transform into alien robots" or "a bloody fight against a zombie
>> apocalypse" and throw lots of money at it.
>>
>> But maybe deep learning and large language models are the same: we take a
>> simple idea (gradient descent learning for deep neural networks) and throw
>> lots of money (and data) at it. In this sense "transformer" is a perfect
>> name for the architecture, isn't it?
>>
>> -J.
>> 😁😍🖖👍🤔
>>
>> -------- Original message --------
>> From: Gillian Densmore <gil.densmore at gmail.com>
>> Date: 2/28/23 1:47 AM (GMT+01:00)
>> To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
>>
>> Subject: Re: [FRIAM] Magic Harry Potter mirrors or more?
>>
>> The transformer architecture works because it's Cybertronian technology, and
>> it is so advanced as to be almost magic.
>>
>> On Mon, Feb 27, 2023 at 3:51 PM Jochen Fromm <jofr at cas-group.net> wrote:
>>
>>> Terrence Sejnowski argues that the new AI super chatbots are like a
>>> magic Harry Potter mirror that tells the user what he wants to hear: "When
>>> people discover the mirror, it seems to provide truth and understanding.
>>> But it does not. It shows the deep-seated desires of anyone who stares into
>>> it". ChatGPT, LaMDA, LLaMA and other large language models would "take in
>>> our words and reflect them back to us".
>>>
>>> https://www.nytimes.com/2023/02/26/technology/ai-chatbot-information-truth.html
>>>
>>> It is true that large language models have absorbed an unimaginably huge
>>> amount of text, but what if the prefrontal cortex in our brain works in
>>> the same way?
>>>
>>> https://direct.mit.edu/neco/article/35/3/309/114731/Large-Language-Models-and-the-Reverse-Turing-Test
>>>
>>> I think it is possible that the "transformer" architecture is so
>>> successful because it is - like the cortical columns in the neocortex - a
>>> modular solution to the problem of predicting what comes next in an
>>> unpredictable world: https://en.wikipedia.org/wiki/Cortical_column
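
(A rough sketch of that modular piece, assuming plain numpy: single-head
scaled dot-product self-attention with a causal mask, so each position can
only look at earlier tokens when the stack is trained to predict what comes
next. Dimensions and names are illustrative only, and a real transformer
repeats this module many times with learned weights.)

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def causal_self_attention(X, Wq, Wk, Wv):
        # One attention "module": every token mixes information from
        # earlier tokens only, which is what lets the network be trained
        # to guess the next token.
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                   # pairwise similarities
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)        # hide the future
        return softmax(scores) @ V                      # weighted sum of values

    # Toy example: 5 tokens with 16-dimensional embeddings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 16))
    Wq, Wk, Wv = (rng.normal(size=(16, 16)) * 0.1 for _ in range(3))
    print(causal_self_attention(X, Wq, Wk, Wv).shape)   # (5, 16)
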
>>>
>>> -J.
>>>
>>> -. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
>>> FRIAM Applied Complexity Group listserv
>>> Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom
>>> https://bit.ly/virtualfriam
>>> to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
>>> FRIAM-COMIC http://friam-comic.blogspot.com/
>>> archives:  5/2017 thru present
>>> https://redfish.com/pipermail/friam_redfish.com/
>>>   1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
>>>
>>
>