[FRIAM] Optimizing for maximal serendipity or how Alan Turing misdirected ALife

uǝlƃ ☣ gepropella at gmail.com
Thu May 28 12:14:57 EDT 2020


The additional power is to mislead someone into thinking an expression is about one thing when it's really about another. I.e., in this context, it's a way to troll and "riff" off some arbitrary string found in some other post. In some contexts, however, it's more serious: conspiracy theories use metaphor liberally in order to *trick* suckers into thinking something that's simply not true.

On 5/28/20 9:08 AM, Marcus Daniels wrote:
> It seems to me like the value of metaphors fits into a sparse dictionary learning approach. If you want to compress a picture of, say, the new Apple headquarters, it helps to have seen a circle or a torus in some form, so you can just refer to that. It would also help to have seen pictures of trees and shrubs to tweak, and to have seen solar panels. Some features will be unique, and simple atoms are needed to refine the image. I'm skeptical that metaphor is the best enduring representation, though. After one has seen many circles and ovals (or conic sections), a parameterized (even dependent) type becomes evident.
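
(For the curious: a minimal, self-contained sketch of the sparse dictionary learning idea Marcus alludes to, assuming scikit-learn. The random "patches", atom count, and sparsity level are illustrative stand-ins, not anything from the post; in practice the patches would come from an actual photo.)

# Learn a small set of reusable "atoms" from image patches, then represent
# each patch as a sparse combination of those atoms -- the compression move
# described above: refer to shapes you've already seen, tweak with a few more.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Stand-in data: 500 flattened 8x8 patches (hypothetical; normally extracted from an image).
patches = rng.normal(size=(500, 64))
patches -= patches.mean(axis=1, keepdims=True)

# Learn 32 atoms; each patch is then coded with at most 3 of them.
dico = MiniBatchDictionaryLearning(
    n_components=32,
    transform_algorithm="omp",      # orthogonal matching pursuit gives the sparse codes
    transform_n_nonzero_coefs=3,
    random_state=0,
)
codes = dico.fit(patches).transform(patches)    # sparse coefficients per patch
reconstruction = codes @ dico.components_       # patches rebuilt from a few atoms each

print("mean nonzero coefficients per patch:", (codes != 0).sum(axis=1).mean())
print("mean reconstruction error:", np.mean((patches - reconstruction) ** 2))
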


-- 
☣ uǝlƃ
