[FRIAM] LaVAN: Localized and Visible Adversarial Noise
uǝlƃ ☣
gepropella at gmail.com
Thu Jan 18 14:51:59 EST 2018
https://arxiv.org/abs/1801.02608
> Most works on adversarial examples for deep-learning based image classifiers use noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noises that cover only 2% of the pixels in the image, none of them over the main object, and that are transferable across images and locations, and successfully fool a state-of-the-art Inception v3 model with very high success rates.
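For a rough sense of the mechanics, here is a minimal sketch (not the authors' exact method) of the general idea: optimize visible noise confined to a small corner patch (~2% of pixels) so that a pretrained Inception v3 predicts an attacker-chosen class. It assumes PyTorch and torchvision; the image, patch location, step count, and target class are placeholders, not the paper's setup.

    # Sketch: localized adversarial patch via gradient ascent on a masked region.
    # Assumes torchvision >= 0.13 for the weights enum; all specifics are illustrative.
    import torch
    import torchvision.models as models

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    model = model.to(device).eval()

    # One 299x299 RGB image in [0, 1]; replace with a real, properly normalized image.
    image = torch.rand(1, 3, 299, 299, device=device)

    # Confine the noise to a 42x42 corner patch: 42*42 / (299*299) is roughly 2% of pixels,
    # placed away from the main object.
    mask = torch.zeros_like(image)
    mask[..., 0:42, 0:42] = 1.0

    target = torch.tensor([859], device=device)       # arbitrary target class index
    patch = torch.rand_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=0.05)

    for step in range(500):
        optimizer.zero_grad()
        # Noise is applied only inside the masked patch; the rest of the image is untouched.
        adv = image * (1 - mask) + patch.clamp(0, 1) * mask
        loss = torch.nn.functional.cross_entropy(model(adv), target)
        loss.backward()
        optimizer.step()

    adv = image * (1 - mask) + patch.detach().clamp(0, 1) * mask
    print("predicted class:", model(adv).argmax(dim=1).item())

The paper's contribution goes beyond this single-image setup: their patches transfer across images and locations, which a per-image optimization like the sketch above does not attempt.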
--
☣ uǝlƃ