Title
Advocating for Multiple Defense Strategies against Adversarial Examples
Abstract
It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples, and vice versa. In this paper, we conduct a geometrical analysis that validates this observation. We then provide a number of empirical insights illustrating the effect of this phenomenon in practice, and review existing defense mechanisms that attempt to defend against multiple attacks by mixing defense strategies. Based on our numerical experiments, we discuss the relevance of this approach and state open questions for the adversarial examples community.
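A minimal sketch of the geometric effect the abstract refers to (the dimension and budget below are illustrative assumptions, not values from the paper): in dimension $d$, a perturbation that saturates an $\ell_\infty$ budget $\epsilon$ on every coordinate has $\ell_2$ norm $\sqrt{d}\,\epsilon$, so the two threat models diverge sharply as $d$ grows.

```python
import numpy as np

# Assumed example: a CIFAR-10-sized input (d = 32*32*3) with the common
# l_inf budget eps = 8/255; neither value is taken from the paper.
d, eps = 3072, 8 / 255

# Worst-case l_inf perturbation: every coordinate at the budget.
delta = np.full(d, eps)

linf = np.abs(delta).max()       # equals eps
l2 = np.linalg.norm(delta)       # equals sqrt(d) * eps

print(f"l_inf norm: {linf:.4f}")
print(f"l_2 norm:   {l2:.4f}")
```

The small $\ell_\infty$ ball thus contains points far outside any comparably small $\ell_2$ ball, which is one way to see why a defense tuned to one norm can fail against the other.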
Year
2020
DOI
10.1007/978-3-030-65965-3_11
Venue
PKDD/ECML Workshops
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name                  Order  Citations  PageRank
Alexandre Araujo      1      3          2.05
Laurent Meunier       2      2          1.71
Rafael Pinot          3      3          2.05
Benjamin Négrevergne  4      35         5.44