Title
On Norm-Agnostic Robustness of Adversarial Training.
Abstract
Adversarial examples are carefully perturbed inputs designed to fool machine learning models. A well-acknowledged defense against such examples is adversarial training, in which adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack that unveils an undesired property of state-of-the-art adversarial training: it fails to achieve robustness against perturbations in the $\ell_2$ and $\ell_\infty$ norms simultaneously. We also discuss a possible solution to this issue and its limitations.
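As a rough illustration of why the two threat models in the abstract differ (this is not the paper's attack, just a standard sketch): a first-order adversary with an $\ell_\infty$ budget moves every coordinate by $\pm\epsilon$ (the sign of the gradient), while an $\ell_2$ adversary moves $\epsilon$ along the normalized gradient direction. A model trained against one step type need not be robust to the other. The function names below are illustrative, not from the paper.

```python
import numpy as np

def linf_step(grad, eps):
    """Steepest-ascent step under an l_inf budget:
    move eps in the sign direction of the gradient (FGSM-style)."""
    return eps * np.sign(grad)

def l2_step(grad, eps):
    """Steepest-ascent step under an l_2 budget:
    move eps along the gradient direction, normalized to unit l_2 norm."""
    norm = np.linalg.norm(grad)
    return eps * grad / (norm + 1e-12)  # small constant avoids division by zero

# Example: the same loss gradient yields very different perturbations.
g = np.array([3.0, -4.0, 0.5])
delta_inf = linf_step(g, eps=0.1)  # every coordinate is +/- 0.1
delta_2 = l2_step(g, eps=0.1)      # total l_2 length is 0.1
```

The sketch makes the norm-agnosticity issue concrete: `delta_inf` saturates every coordinate of the budget, whereas `delta_2` concentrates the budget along the dominant gradient directions, so the two perturbation sets overlap only partially.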
Year: 2019
Venue: arXiv: Learning
DocType: Journal
Volume: abs/1905.06455
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name            Order  Citations  PageRank
Bai Li          1      10         2.82
Changyou Chen   2      365        36.95
Wenlin Wang     3      51         7.06
L. Carin        4      4603       339.36