<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>REC -</p>
<p>Good find! <br>
</p>
<p>I am not closely following the development and results of GAN
work, but this kind of study seems to explicate at least ONE
GOOD REASON for worrying about AI changing the nature of the world
as we know it (even if it isn't precisely an existential threat).
Convolved with Carl's offering around "weaponizing complexity", it
feels more and more believable (recursion unintended) that the
wielders of strong AI/ML will have the upper hand in any tactical
and possibly strategic domain (warfare, public opinion, markets,
etc.). <br>
</p>
<p>I don't know how deeply technical the presumed
election manipulation of 2016 (and now 2020) is, but the work you
reference here *does* seem to imply that, with information
venues/vectors like streaming video (TV, movies, clips, and
attendant advertising) and social media (FB/Insta/Twit...), the
understanding and tools are already in place to significantly
manipulate public opinion. Based on my anecdotal experience of
people's *certainty*, this article is very on-point. And it
doesn't even touch on the technology of "deep fakes". </p>
<p>- Steve<br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 12/27/19 8:21 PM, Roger Critchlow
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAGayqotJMybzV=_UVZQo609YOP0uDvkksJjAD-cmW-dPNrZdTA@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">This talk was mentioned on Hacker News this week
and inspired my babbling at Saveur this morning. <a
href="https://slideslive.com/38921495/how-to-know"
moz-do-not-send="true">https://slideslive.com/38921495/how-to-know</a>.
The talk was delivered at NeurIPS on December 9 and discusses
recent research on how people come to believe they know
something.
<div><br>
</div>
<div>This paper <a
href="https://www.mitpressjournals.org/doi/full/10.1162/opmi_a_00017"
moz-do-not-send="true">https://www.mitpressjournals.org/doi/full/10.1162/opmi_a_00017</a> describes
an Amazon Mechanical Turk experiment in which people became
certain they understood the boolean rule they were being
taught by examples.</div>
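<div>The premise is easy to illustrate with a toy sketch (mine, not
from the paper): after only a couple of labeled examples, several
distinct boolean rules remain consistent with the data, so a learner
who already feels certain about "the" rule has outrun the evidence.
The rule set and examples below are hypothetical.</div>
<pre>
# Toy illustration (not from the paper): how many simple boolean
# rules over two features remain consistent with a few examples?

RULES = {
    "A AND B": lambda a, b: a and b,
    "A OR B":  lambda a, b: a or b,
    "A XOR B": lambda a, b: a != b,
    "just A":  lambda a, b: a,
    "just B":  lambda a, b: b,
}

def consistent(rules, examples):
    """Return names of rules that match every labeled example."""
    return [name for name, f in rules.items()
            if all(f(a, b) == label for (a, b), label in examples)]

# Two examples: (A=1, B=1) -> True, (A=0, B=0) -> False.
examples = [((1, 1), True), ((0, 0), False)]
print(consistent(RULES, examples))  # four rules still fit
</pre>
<div>With those two examples, AND, OR, "just A", and "just B" all
survive; only XOR is ruled out. Certainty at this point is premature.</div>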
<div><br>
</div>
<div>-- rec --</div>
<div><br>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<pre class="moz-quote-pre" wrap="">============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe <a class="moz-txt-link-freetext" href="http://redfish.com/mailman/listinfo/friam_redfish.com">http://redfish.com/mailman/listinfo/friam_redfish.com</a>
archives back to 2003: <a class="moz-txt-link-freetext" href="http://friam.471366.n2.nabble.com/">http://friam.471366.n2.nabble.com/</a>
FRIAM-COMIC <a class="moz-txt-link-freetext" href="http://friam-comic.blogspot.com/">http://friam-comic.blogspot.com/</a> by Dr. Strangelove
</pre>
</blockquote>
</body>
</html>