[FRIAM] A new age of AI is dawning

glen gepropella at gmail.com
Thu Jan 12 17:47:44 EST 2023


This paper covers (nicely, I think) the idea of embedding an LLM in a larger context, including error-correcting against the world. It's not a slam dunk. It might still be the case that all *we* do is predict the next token. But I think the results around predictive processing indicate that even if that's what we're doing fundamentally, we do it in lower and higher orders ... something a lone LLM won't be able to do. We'd need a (large) visual model, a (large) enteroception model, maybe a (large) environment model, etc. Cue the metaphor-philes!
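
To make the stacking concrete, here's a minimal toy sketch, my own and not the paper's architecture, of hierarchical predictive processing across several modality streams. Every name in it is a placeholder; the "models" are just scalar predictors minimizing their own prediction error, with residuals passed up to the next level.

import random

class PredictiveLayer:
    """One level in a predictive hierarchy over a scalar stream."""
    def __init__(self, rate):
        self.prediction = 0.0
        self.rate = rate

    def step(self, observation):
        # The residual (prediction error) is what flows up a level;
        # the update is the "error-correcting against the world" part.
        error = observation - self.prediction
        self.prediction += self.rate * error
        return error

# Stand-ins for the hypothetical visual / enteroception / environment
# models: each is just a noisy stream here.
modalities = {name: [random.gauss(0.0, 1.0) for _ in range(200)]
              for name in ("visual", "enteroceptive", "environmental")}

for name, stream in modalities.items():
    # Lower and higher orders: each layer predicts the residual of
    # the layer beneath it.
    hierarchy = [PredictiveLayer(0.5), PredictiveLayer(0.1)]
    signal = stream
    for layer in hierarchy:
        signal = [layer.step(x) for x in signal]
    top_var = sum(e * e for e in signal) / len(signal)
    print(f"{name}: top-level residual variance = {top_var:.3f}")

The point of the toy is only that "predicting the next token" can happen at several orders at once, with each order correcting against what the one below failed to predict.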

This article was interesting:

https://www.newyorker.com/magazine/2023/01/16/how-should-we-think-about-our-different-styles-of-thinking

I've doubted that "learning styles" are any kind of scientifically justifiable construct. But I do admit to being "verbal" ... or what I call "algebraic" ... rather than "object" or "spatial". If LLMs can safely be chalked up as fundamentally sequential reasoners (which may merely simulate visual reasoning), then we're a tiny step closer to tests that could falsify claims of AGI.
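
A toy illustration (entirely hypothetical, not an established test) of one such probe: hold the *spatial* content of a question fixed while varying how verbosely it is serialized into tokens. A purely sequential reasoner's effort should track serialization length; a genuinely spatial one's should not.

def grid_prompt(path, verbose):
    """Serialize a walk on a grid as text, terse or long-winded."""
    names = {"N": "north", "S": "south", "E": "east", "W": "west"}
    if verbose:
        steps = ", then one step ".join(names[d] for d in path)
        body = f"Start at (0, 0). Take one step {steps}."
    else:
        body = f"Start at (0,0). Moves: {path}."
    return body + " Where do you end up?"

def ground_truth(path):
    # The answer depends only on the walk, not on how it was serialized.
    x = path.count("E") - path.count("W")
    y = path.count("N") - path.count("S")
    return (x, y)

walk = "NNEESWN"
print(grid_prompt(walk, verbose=False))
print(grid_prompt(walk, verbose=True))
print("ground truth:", ground_truth(walk))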

On 1/12/23 12:08, Jochen Fromm wrote:
> The buzz about ChatGPT has apparently convinced Microsoft to invest $10 billion (!) in OpenAI. It looks like a new arms race between Google, Microsoft, and Meta is emerging. Who will create the first self-aware AI by connecting such a large language model to the world?
> https://www.cnbc.com/2023/01/10/microsoft-to-invest-10-billion-in-chatgpt-creator-openai-report-says.html
> 
> It feels as if human-level AI is not that far away anymore now that machines have learned language. This NY Times article about large language models and ChatGPT is a bit older, but still good. As the article says "maybe predicting the next word is just part of what thinking is."
> https://www.nytimes.com/2022/04/15/magazine/ai-language.html


-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ


