Title
Universal Adversarial Perturbations
Abstract
Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, despite their being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundaries of classifiers. It further points to a potential security breach: single directions in the input space that adversaries can exploit to break a classifier on most natural images.
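The "systematic algorithm" the abstract refers to iterates over data points: whenever the current universal perturbation v fails to fool the classifier on an image x, a minimal extra perturbation sending x + v across the decision boundary is computed (DeepFool in the paper), and the updated v is projected back onto an ℓp ball of radius ξ. A minimal NumPy sketch on a toy linear binary classifier, where that minimal perturbation has a closed form, might look as follows (the function names, the toy model, and the 5% overshoot are illustrative assumptions, not the paper's code):

```python
import numpy as np

def project_l2(v, xi):
    # Project the perturbation v onto the l2 ball of radius xi.
    n = np.linalg.norm(v)
    return v if n <= xi else v * (xi / n)

def minimal_linear_perturbation(x, v, w, b):
    # For a linear classifier sign(w.x + b), the minimal l2 perturbation
    # moving x + v across the boundary is exact (DeepFool's closed form);
    # a small overshoot makes sure the point actually crosses.
    f = w @ (x + v) + b
    return -(f / (w @ w)) * w * 1.05

def universal_perturbation(X, w, b, xi=2.0, delta=0.2, max_iter=10):
    # Aggregate per-image minimal perturbations, projecting the running
    # universal perturbation v onto the l2 ball of radius xi, until the
    # fooling rate on X exceeds 1 - delta (or max_iter passes elapse).
    v = np.zeros(X.shape[1])
    labels = np.sign(X @ w + b)
    for _ in range(max_iter):
        for x in X:
            # Only update v on points it does not yet fool.
            if np.sign(w @ (x + v) + b) == np.sign(w @ x + b):
                dv = minimal_linear_perturbation(x, v, w, b)
                v = project_l2(v + dv, xi)
        fooling_rate = np.mean(np.sign((X + v) @ w + b) != labels)
        if fooling_rate >= 1 - delta:
            break
    return v
```

In the real setting, the closed-form step is replaced by DeepFool on the deep network, and the projection can be onto an ℓ∞ ball instead; the outer structure of the loop is unchanged.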
Year
2017
DOI
10.1109/CVPR.2017.17
Venue
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
universal adversarial perturbations, deep neural network classifier, image-agnostic, very small perturbation vector, natural images, universal perturbations, deep neural networks
DocType
Conference
Volume
abs/1610.08401
Issue
1
ISSN
1063-6919
ISBN
978-1-5386-0458-8
Citations
166
PageRank
5.49
References
16
Authors
4
Name                          | Order | Citations | PageRank
Seyed-Mohsen Moosavi-Dezfooli | 1     | 627       | 26.32
Alhussein Fawzi               | 2     | 766       | 36.80
Omar Fawzi                    | 3     | 261       | 17.96
Pascal Frossard               | 4     | 30152     | 30.41