[FRIAM] Celeste Kidd - How to Know

uǝlƃ ☣ gepropella at gmail.com
Mon Dec 30 16:44:52 EST 2019


Ha!  "There's a fun sub-result, which is, if you have a very deviant concept ... if you have a very weirdo concept that other people don't share, you're actually much more likely to be aware that you have a deviant concept."

At least I *know* I'm a deviant.


On 12/29/19 8:43 AM, Roger Critchlow wrote:
> I thought she was arguing that the very mechanisms that Google, Facebook, Twitter, etc. are using right now to engage people's interest online are already engendering and entrenching all sorts of weird beliefs.  6-9 minutes of activated-charcoal advocacy videos and you're probably certain that black smoothies are okay, maybe even good for you.  There are no neutral platforms, because the order in which content is presented is never neutral, and it is especially biased if its goal is to keep you clicking.  Whether this allows focused election manipulation seems dubious, but it does allow for thousands of bizarre theories to be injected into the public consciousness at low cost, and some of them even make money.  Hey, some of them, bizarre as they are, might turn out to be correct, not that the platforms have any interest in that aspect, because that wouldn't be neutral.
> [...]
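
For concreteness, here's a toy sketch of that "keep you clicking"
ordering.  Everything in it is invented (the items, the scores, the
names); it is not any real platform's ranker.  The point is only that
when the sort key is predicted engagement, accuracy never enters the
ordering at all:

    # Toy engagement-first ranker (hypothetical; illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        predicted_clicks: float  # what the engagement model optimizes
        accurate: bool           # invisible to the ranker

    feed = [
        Item("Peer-reviewed meta-analysis finds modest effect", 0.02, True),
        Item("Activated charcoal smoothie CURED my fatigue", 0.31, False),
        Item("Agency issues measured safety guidance", 0.05, True),
        Item("Doctors HATE this one weird detox trick", 0.40, False),
    ]

    # The objective is clicks, so the sort key ignores accuracy entirely.
    for item in sorted(feed, key=lambda i: i.predicted_clicks, reverse=True):
        print(f"{item.predicted_clicks:.2f}  accurate={item.accurate}  {item.title}")

Run it and the two inaccurate items land at the top of the feed, which
is the whole bias: the ordering is "neutral" only with respect to an
objective that has nothing to do with truth.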

> 
> On Sat, Dec 28, 2019 at 10:23 AM Steven A Smith <sasmyth at swcp.com <mailto:sasmyth at swcp.com>> wrote:
> 
>     REC -
> 
>     Good find!
> 
>     I am not closely following the development and results of GAN work, but it seems like this kind of study explicates at least ONE GOOD REASON for worrying about AI changing the nature of the world as we know it (even if it isn't precisely an existential threat).  Convolved with Carl's offering around "weaponizing complexity", it feels more and more believable (recursion unintended) that the wielders of strong AI/ML will have the upper hand in any tactical and possibly strategic domain (warfare, public opinion, markets, etc.).
> [...]
> 
>     On 12/27/19 8:21 PM, Roger Critchlow wrote:
>>     This talk was mentioned on Hacker News this week and inspired my babbling at Saveur this morning.  https://slideslive.com/38921495/how-to-know.  The talk was delivered at NeurIPS on December 9 and discusses recent research on how people come to believe they know something.
>>
>>     This paper https://www.mitpressjournals.org/doi/full/10.1162/opmi_a_00017 describes an Amazon Mechanical Turk experiment in which people became certain they understood a boolean rule they were being taught by examples.
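
A minimal model of that certainty dynamic, assuming a uniform prior
over a tiny invented hypothesis space (my own sketch, not the paper's
model or stimuli): a learner that simply eliminates rules inconsistent
with the labeled examples can hit full certainty after only a couple
of examples.

    # Toy Bayesian rule learner (hypothetical sketch, not the paper's model).
    # Uniform prior over four boolean rules; each labeled example zeroes out
    # the inconsistent ones, so certainty in the survivor ramps up fast.
    HYPOTHESES = {
        "and":   lambda a, b: a and b,
        "or":    lambda a, b: a or b,
        "xor":   lambda a, b: a != b,
        "not_a": lambda a, b: not a,
    }

    def posterior(examples):
        weights = {name: 1.0 for name in HYPOTHESES}
        for (a, b), label in examples:
            for name, rule in HYPOTHESES.items():
                if rule(a, b) != label:
                    weights[name] = 0.0
        total = sum(weights.values())
        return {name: w / total for name, w in weights.items()}

    # Teach "xor" one labeled example at a time and watch certainty grow.
    teaching = [((True, True), False), ((True, False), True),
                ((False, True), True), ((False, False), False)]
    for n in range(1, len(teaching) + 1):
        p = posterior(teaching[:n])
        print(n, "examples ->", {k: round(v, 2) for k, v in p.items() if v})

After two examples this learner is already at probability 1.0 on
"xor", which is the flavor of the phenomenon: a handful of consistent
examples is enough to feel certain, whether or not the hypothesis
space actually contains the rule being taught.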

-- 
☣ uǝlƃ


