Title
Robustness of Classifiers to Universal Perturbations: A Geometric Perspective
Abstract
Deep networks have recently been shown to be vulnerable to universal perturbations: there exist very small image-agnostic perturbations that cause most natural images to be misclassified by such classifiers. In this paper, we provide a quantitative analysis of the robustness of classifiers to universal perturbations and draw a formal link between this robustness and the geometry of the decision boundary. Specifically, we establish theoretical bounds on the robustness of classifiers under two decision boundary models (flat and curved). We show in particular that the robustness of deep networks to universal perturbations is driven by a key property of their curvature: there exist shared directions along which the decision boundary of deep networks is systematically positively curved. Under such conditions, we prove the existence of small universal perturbations. Our analysis further provides a novel geometric method for computing universal perturbations, in addition to explaining their properties.
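
The abstract links universal perturbations to shared directions of positive curvature of the decision boundary. The sketch below is a minimal PyTorch illustration of that idea, not the authors' algorithm: for each image it estimates the direction along which the decision surface is most curved (via finite-difference Hessian-vector products of a margin function) and aggregates these per-image directions into one candidate universal perturbation. The function names (margin, dominant_curvature_direction, universal_perturbation), the finite-difference step, and the sign-aligned averaging are illustrative assumptions, not details taken from the paper.

```python
import torch

def margin(model, x, label):
    # Decision function f_label(x) - max_{j != label} f_j(x); its zero level
    # set is (locally) the decision boundary around class `label`.
    logits = model(x.unsqueeze(0)).squeeze(0)
    mask = torch.ones_like(logits, dtype=torch.bool)
    mask[label] = False
    return logits[label] - logits[mask].max()

def grad_margin(model, x, label):
    # Gradient of the margin with respect to the input image.
    x_req = x.clone().requires_grad_(True)
    g, = torch.autograd.grad(margin(model, x_req, label), x_req)
    return g

def dominant_curvature_direction(model, x, label, iters=10, h=1e-2):
    # Power iteration with finite-difference Hessian-vector products,
    # Hv ~ (grad(x + h*v) - grad(x - h*v)) / (2h). For piecewise-linear (ReLU)
    # networks the exact Hessian is zero almost everywhere, so the finite
    # difference probes curvature at scale h instead of the pointwise Hessian.
    v = torch.randn_like(x)
    v = v / v.norm()
    for _ in range(iters):
        hv = (grad_margin(model, x + h * v, label)
              - grad_margin(model, x - h * v, label)) / (2 * h)
        v = hv / (hv.norm() + 1e-12)
    return v

def universal_perturbation(model, images, labels, eps):
    # Aggregate per-image dominant curvature directions into a single shared
    # direction; sign-align each one so they do not cancel, then rescale to
    # the L2 budget eps.
    acc = torch.zeros_like(images[0])
    for x, y in zip(images, labels):
        v = dominant_curvature_direction(model, x, y)
        if torch.dot(acc.flatten(), v.flatten()) < 0:
            v = -v
        acc = acc + v
    return eps * acc / (acc.norm() + 1e-12)
```

As a usage illustration (hypothetical model and data), one would call v = universal_perturbation(model, images, labels, eps=10.0) and then measure the fooling rate as (model(images + v).argmax(1) != labels).float().mean().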
Year: 2018
Venue: International Conference on Learning Representations (ICLR)
Field: Curvature, Geometric method, Pattern recognition, Computer science, Algorithm, Robustness (computer science), Artificial intelligence, Decision boundary, Perturbation (astronomy)
DocType: Conference
Citations: 5
PageRank: 0.38
References: 0
Authors: 5
Name                           Order  Citations  PageRank
Seyed-Mohsen Moosavi-Dezfooli  1      627        26.32
Alhussein Fawzi                2      766        36.80
Omar Fawzi                     3      71         10.23
Pascal Frossard                4      3015       230.41
Stefano Soatto                 5      4967       350.34