[FRIAM] Datasets as Experience

Jochen Fromm jofr at cas-group.net
Mon Feb 6 14:33:30 EST 2023


Oh, Google has already created a "mixture of experts" architecture. Interesting.

https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html

The amount of data they use to train large language models is mind-boggling. I am curious what Google and OpenAI will present this year.

-J.
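(A minimal sketch of the sparse routing idea behind mixture-of-experts layers, in NumPy. This is a toy illustration under assumed toy sizes, not Google's GLaM code: a small gating network scores the experts for each token, and only the top-k experts are actually evaluated, which is why such models can carry far more parameters than they use per token.)

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 8, 4, 2               # toy sizes, purely illustrative

    # One weight matrix per expert, plus a gating network that scores experts.
    experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
    gate = rng.normal(size=(d_model, n_experts))

    def moe_layer(token):
        scores = token @ gate                         # gating score for each expert
        top = np.argsort(scores)[-top_k:]             # keep only the top-k experts
        weights = np.exp(scores[top] - scores[top].max())
        weights /= weights.sum()                      # softmax over the selected experts
        # Only the selected experts run; the rest are skipped entirely.
        return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

    token = rng.normal(size=d_model)
    print(moe_layer(token))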
-------- Original message --------
From: Jochen Fromm <jofr at cas-group.net>
Date: 2/5/23 1:38 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group <friam at redfish.com>
Subject: [FRIAM] Datasets as Experience

Would a CV of a large language model contain all the datasets it has seen? As adaptive agents of our selfish genes, we are all trained on slightly different datasets. A Spanish speaker is a person trained on a Spanish dataset, an Italian speaker is a person trained on an Italian dataset, and so on. Speakers of different languages are trained on different datasets, which is why the same sentence is easy for a native speaker but impossible to understand for those who do not know the language.

Do all large language models need to be trained on the same datasets? Or could many large language models be combined into a society of mind, as Marvin Minsky describes in his book "The Society of Mind"? Now that they are able to understand language, it seems possible that one large language model could reply to the questions of another. And we would even be able to understand the conversations.

-J.
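(A minimal sketch of that last idea: two language models taking turns, each replying to the other's last message. The generate() function below is a hypothetical placeholder for whatever LLM API one would actually call, and the model names are made up.)

    def generate(model_name, prompt):
        # Placeholder: call the LLM of your choice here and return its reply.
        # (Hypothetical stand-in so the loop below runs without any API.)
        return "[" + model_name + " replying to: " + prompt + "]"

    message = "What is a mixture of experts?"
    for turn in range(4):
        speaker = "model_a" if turn % 2 == 0 else "model_b"
        message = generate(speaker, message)   # one model answers the other
        print(speaker + ": " + message)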

