Title
Feature Denoising for Improving Adversarial Robustness
Abstract
Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature-denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art attains 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, it maintains 42.6% accuracy. Our method was ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
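For readers who want a concrete picture of the architecture the abstract describes: each denoising block applies a non-local filtering operation to a feature map, followed by a 1x1 convolution and a residual (identity) connection, so the block can be trained end-to-end inside the backbone. The released code at the URL above is in TensorFlow; the following is only a minimal PyTorch sketch of one such block, assuming the softmax-normalized dot-product ("Gaussian") variant of the non-local means filter. The class name and the absence of embedding transforms are illustrative choices, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    # Non-local feature denoising: every spatial position is replaced by a
    # weighted mean over all positions, then passed through a 1x1 conv with
    # an identity skip. (Sketch only; details differ from the paper's code.)
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)                        # (N, C, HW)
        # Softmax-normalized dot-product affinities between all positions.
        attn = F.softmax(torch.bmm(flat.transpose(1, 2), flat), dim=-1)
        denoised = torch.bmm(flat, attn.transpose(1, 2))  # (N, C, HW)
        denoised = denoised.view(n, c, h, w)
        # 1x1 conv plus residual connection keeps the block trainable
        # end-to-end without disturbing the backbone's signal path.
        return x + self.conv(denoised)

Because of the residual connection, the block can fall back to an identity mapping, which is one reason such blocks can be inserted into an existing network (e.g., a ResNet) and trained jointly with adversarial training.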
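The robustness figures quoted above are measured under projected gradient descent (PGD) white-box attacks of varying iteration counts. For orientation, a minimal untargeted L-infinity PGD loop looks roughly like the sketch below; the epsilon, step size, random start, and untargeted loss are common defaults assumed here for illustration, not necessarily the paper's exact evaluation protocol.

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=16 / 255, alpha=1 / 255, steps=10):
    # L-infinity PGD with a random start inside the epsilon-ball.
    # (Assumed illustrative settings; the paper's protocol may differ.)
    adv = (images + torch.empty_like(images).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        adv = adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the epsilon-ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, images - eps), images + eps).clamp(0, 1)
    return adv.detach()

Adversarial training, which the abstract combines with feature denoising, generates such examples on the fly during training and optimizes the network against them.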
Year
2019
DOI
10.1109/CVPR.2019.00059
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
Deep Learning, Recognition: Detection, Categorization, Retrieval
DocType
Conference
Volume
abs/1812.03411
ISSN
1063-6919
ISBN
978-1-7281-3294-5
Citations
43
PageRank
0.98
References
6
Authors
5
Name                     Order   Citations   PageRank
Cihang Xie               1       148         9.36
Yu-Xin Wu                2       185         7.00
Laurens van der Maaten   3       763         48.75
Alan L. Yuille           4       10339       1902.01
Kaiming He               5       21469       696.72