Title
Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks.
Abstract
Modern neural networks are highly non-robust against adversarial manipulation. A significant amount of work has been invested in techniques to compute lower bounds on robustness through formal guarantees and to build provably robust models. However, it is still difficult to get guarantees for larger networks or robustness against larger perturbations. Thus, attack strategies are needed to provide tight upper bounds on the actual robustness. We significantly improve the randomized gradient-free attack for ReLU networks (Croce and Hein in GCPR, 2018), in particular by scaling it up to large networks. We show that our attack achieves similar or significantly smaller robust accuracy than state-of-the-art attacks such as PGD or that of Carlini and Wagner, thus revealing that these methods overestimate robustness. Our attack is not based on a gradient-descent scheme and is in this sense gradient-free, which makes it less sensitive to the choice of hyperparameters, as no careful selection of the step size is required.
Year
2019
DOI
10.1007/s11263-019-01213-0
Venue
International Journal of Computer Vision
Keywords
Adversarial attacks, Adversarial robustness, White-box attacks, Gradient-free attacks
DocType
Journal
Volume
128
Issue
4
ISSN
0920-5691
Citations
1
PageRank
0.37
References
4
Authors
3
Name | Order | Citations | PageRank
Francesco Croce | 1 | 4 | 1.75
Jonas Rauber | 2 | 88 | 5.14
Matthias Hein | 3 | 663 | 62.80