Title
Sparse And Imperceivable Adversarial Attacks
Abstract
Neural networks have been shown to be vulnerable to a variety of adversarial attacks. From a safety perspective, highly sparse adversarial attacks are particularly dangerous. On the other hand, the pixelwise perturbations of sparse attacks are typically large and thus potentially detectable. We propose a new black-box technique to craft adversarial examples that aims at minimizing the l0-distance to the original image. Extensive experiments show that our attack is better than or competitive with the state of the art. Moreover, we can integrate additional bounds on the componentwise perturbation. Allowing pixels to change only in regions of high variation, and avoiding changes along axis-aligned edges, makes our adversarial examples almost non-perceivable. Finally, we adapt the Projected Gradient Descent attack to the l0-norm, integrating componentwise constraints. This enables adversarial training to enhance the robustness of classifiers against sparse and imperceivable adversarial manipulations.
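The abstract mentions adapting Projected Gradient Descent to the l0-norm. Below is a minimal, illustrative sketch of that idea, assuming a PyTorch classifier and using the common top-k projection onto the l0 ball (zeroing all but the k pixels with the largest perturbation magnitude). The function name l0_pgd and all parameters are hypothetical; the paper's actual PGD_0 additionally integrates componentwise constraints, which are omitted here.

```python
# Hypothetical sketch of PGD projected onto an l0 ball, not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def l0_pgd(model, x, y, k=20, steps=40, step_size=0.1):
    """PGD attack that re-projects the perturbation onto the l0 ball of size k.

    x: image batch of shape (B, C, H, W) with values in [0, 1]
    y: true labels; k: maximum number of perturbed pixel locations per image
    """
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            # Ascend the loss to move away from the correct class.
            delta += step_size * delta.grad.sign()
            # Keep the perturbed image valid: clamp x + delta into [0, 1].
            delta.clamp_(min=-x, max=1 - x)
            # l0 projection: keep only the k pixels (magnitude summed over
            # channels) with the largest perturbation, zero out the rest.
            score = delta.abs().sum(dim=1)          # (B, H, W)
            flat = score.flatten(1)                 # (B, H*W)
            topk = flat.topk(k, dim=1).indices
            mask = torch.zeros_like(flat)
            mask.scatter_(1, topk, 1.0)
            delta *= mask.view_as(score).unsqueeze(1)  # broadcast over channels
        delta.grad.zero_()
    return (x + delta).detach()

# Toy usage with a hypothetical linear classifier on random data:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = l0_pgd(model, x, y, k=10)
# Each adversarial image differs from the original in at most k pixel locations.
print((x_adv != x).any(dim=1).flatten(1).sum(dim=1))
```

Because the projection onto the l0 ball is exact (keep the k largest components), every iterate stays k-sparse, which is what makes the sparsity constraint usable inside adversarial training.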
Year
2019
DOI
10.1109/ICCV.2019.00482
Venue
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019)
Field
Computer vision, Computer science, Artificial intelligence, Adversarial system
DocType
Conference
Volume
2019
Issue
1
ISSN
1550-5499
Citations
3
PageRank
0.37
References
0
Authors
2
Name             Order  Citations  PageRank
Francesco Croce  1      4          1.75
Matthias Hein    2      6636       2.80