Title
Defending against Universal Adversarial Patches by Clipping Feature Norms.
Abstract
Physical-world adversarial attacks based on universal adversarial patches have been proven to mislead deep convolutional neural networks (CNNs), exposing the vulnerability of real-world visual classification systems built on CNNs. In this paper, we empirically reveal and mathematically explain that universal adversarial patches usually lead to deep feature vectors with very large norms in popular CNNs. Inspired by this observation, we propose a simple yet effective defense based on a new feature norm clipping (FNC) layer, a differentiable module that can be flexibly inserted into different CNNs to adaptively suppress the generation of large-norm deep feature vectors. FNC introduces no trainable parameters and only very low computational overhead. Nevertheless, experiments on multiple datasets validate that it effectively improves the robustness of different CNNs against white-box patch attacks while maintaining satisfactory recognition accuracy on clean samples.
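The abstract only sketches the mechanism, so the following is a minimal PyTorch sketch of what a feature norm clipping layer could look like when inserted after an intermediate convolutional block. The class name FeatureNormClip and the choice of adaptive threshold (the per-image mean of the spatial feature norms) are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn


class FeatureNormClip(nn.Module):
    """Illustrative feature norm clipping (FNC) layer.

    Each spatial position of a conv feature map (N, C, H, W) holds a
    C-dimensional feature vector. Vectors whose L2 norm exceeds an
    adaptive threshold (here assumed to be the mean norm over the
    spatial positions of the same image) are rescaled down to that
    threshold. The module has no trainable parameters.
    """

    def __init__(self, eps: float = 1e-12):
        super().__init__()
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-position feature norms: shape (N, 1, H, W)
        norms = x.norm(p=2, dim=1, keepdim=True)
        # Adaptive threshold: mean spatial norm per image (assumed rule)
        threshold = norms.mean(dim=(2, 3), keepdim=True)
        # Scale factor is < 1 only where the norm exceeds the threshold
        scale = torch.clamp(threshold / (norms + self.eps), max=1.0)
        return x * scale


if __name__ == "__main__":
    # Example: large-norm feature vectors are suppressed, others untouched
    layer = FeatureNormClip()
    feats = torch.randn(2, 64, 14, 14) * 5.0
    clipped = layer(feats)
    print(feats.norm(dim=1).max(), clipped.norm(dim=1).max())
```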
Year
2021
DOI
10.1109/ICCV48922.2021.01612
Venue
ICCV
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name, Order, Citations, PageRank
Cheng Yu, 1, 7, 1.10
Jiansheng Chen, 2, 273, 31.28
Youze Xue, 3, 13, 2.54
Yuyang Liu, 4, 0, 0.34
Weitao Wan, 5, 15, 2.22
Jiayu Bao, 6, 6, 1.09
Huimin Ma, 7, 0, 1.01