Title
Robust Superpixel-Guided Attentional Adversarial Attack
Abstract
Deep Neural Networks are vulnerable to adversarial samples, which can fool classifiers by adding small perturbations to the original image. Since the pioneering optimization-based adversarial attack method, many follow-up methods have been proposed in the past several years. However, most of these methods add perturbations in a "pixel-wise" and "global" way. First, because of the contradiction between the local smoothness of natural images and the noisy property of these adversarial perturbations, the "pixel-wise" way makes these methods not robust to image-processing-based defense methods and steganalysis-based detection methods. Second, we find that adding perturbations to the background is less useful than adding them to the salient object, so the "global" way is also not optimal. Based on these two considerations, we propose the first robust superpixel-guided attentional adversarial attack method. Specifically, the adversarial perturbations are added only to the salient regions and are guaranteed to be the same within each superpixel. Through extensive experiments, we demonstrate that our method preserves its attack ability even in this highly constrained modification space. More importantly, compared to existing methods, it is significantly more robust to image-processing-based defenses and steganalysis-based detection.
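The constraint described in the abstract (one shared perturbation value per superpixel, restricted to salient regions) can be sketched as a projection applied to a pixel-wise gradient. This is an illustrative NumPy sketch, not the authors' exact algorithm: the FGSM-style sign step and the function/argument names are assumptions, and `labels` / `saliency_mask` would come from an off-the-shelf superpixel method (e.g. SLIC) and saliency detector.

```python
import numpy as np

def superpixel_attention_perturb(grad, labels, saliency_mask, eps):
    """Project a pixel-wise attack gradient onto a constrained space:
    uniform within each superpixel, zero outside the salient region.
    (Hypothetical sketch of the abstract's constraint, not the paper's code.)

    grad          : 2-D array, per-pixel loss gradient w.r.t. the image
    labels        : 2-D int array, superpixel id for each pixel
    saliency_mask : 2-D {0,1} array, 1 on the salient object
    eps           : perturbation budget
    """
    delta = np.zeros_like(grad, dtype=float)
    for sp in np.unique(labels):
        region = labels == sp
        # average the gradient over the superpixel -> one value per superpixel
        delta[region] = grad[region].mean()
    delta *= saliency_mask           # restrict perturbation to salient regions
    return eps * np.sign(delta)      # FGSM-style bounded step (an assumption)
```

On a toy 2x2 image with two superpixels (top and bottom rows) and a mask covering only the top row, the returned perturbation is uniform within the top superpixel and zero elsewhere.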
Year
2020
DOI
10.1109/CVPR42600.2020.01291
Venue
CVPR
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
10
Authors
Order  Name            Citations/PageRank (digits run together in source)
1      X. Dong         338.20
2      Jiangfan Han    00.34
3      Dongdong Chen   5219.10
4      Jiayang Liu     145.95
5      Huanyu Bian     11.71
6      Zehua Ma        00.68
7      Hongsheng Li    151685.29
8      Xiaogang Wang   9647386.70
9      Weiming Zhang   63.81
10     Nenghai Yu      2238183.33