Title
Defending Adversarial Attacks by Correcting Logits
Abstract
Generating and eliminating adversarial examples has been an intriguing topic in the field of deep learning. While previous research verified that adversarial attacks are often fragile and can be defended via image-level processing, it remains unclear how high-level features are perturbed by such attacks. We investigate this issue from a new perspective that relies purely on logits, the class scores before softmax, to detect and defend against adversarial attacks. Our defender is a two-layer network trained on a mixed set of clean and perturbed logits, with the goal of recovering the original prediction. Against a wide range of adversarial attacks, this simple approach shows promising results with relatively high defense accuracy, and the defender transfers across attackers with similar properties. More importantly, our defender works in scenarios where image data are unavailable, and it enjoys high interpretability, especially at the semantic level.
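A minimal sketch of the defender described in the abstract: a two-layer network trained on a mixed set of clean and perturbed logits to recover the original prediction. The hidden width, optimizer settings, and the way perturbed logits are generated are assumptions for illustration; the abstract specifies only the two-layer architecture and the training objective.

```python
# Sketch of a logit-correction defender, per the abstract: a two-layer
# network trained on mixed clean/perturbed logits to recover the clean
# prediction. Hidden width, training schedule, and the synthetic
# "attack" below are assumptions, not details from the paper.
import torch
import torch.nn as nn

class LogitDefender(nn.Module):
    """Two-layer MLP mapping (possibly attacked) logits to corrected logits."""
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        return self.net(logits)

def train_defender(defender, mixed_logits, clean_labels, epochs=50, lr=1e-3):
    """Train on a mixed set of clean and perturbed logits, supervising
    with the predictions made on the corresponding clean inputs."""
    opt = torch.optim.Adam(defender.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(defender(mixed_logits), clean_labels)
        loss.backward()
        opt.step()
    return defender

if __name__ == "__main__":
    # Toy stand-ins for logits collected from a classifier: 512 clean
    # 10-class logit vectors, plus noisy copies standing in for the
    # effect of an adversarial attack on the logits.
    clean = torch.randn(512, 10)
    perturbed = clean + 0.5 * torch.randn(512, 10)
    labels = clean.argmax(dim=1)  # original (clean) predictions
    mixed = torch.cat([clean, perturbed])
    targets = torch.cat([labels, labels])
    defender = train_defender(LogitDefender(num_classes=10), mixed, targets)
    acc = (defender(perturbed).argmax(dim=1) == labels).float().mean().item()
    print(f"recovery accuracy on perturbed logits: {acc:.2%}")
```

At test time, such a defender only needs the classifier's logits as input, which is consistent with the abstract's claim that the method works even when image data are unavailable.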
Year
2019
Venue
CoRR
DocType
Journal
Volume
abs/1906.10973
Citations
0
PageRank
0.34
References
0
Authors
6
Name          Order  Citations  PageRank
Yifeng Li     1      9          3.33
Ling-Xi Xie   2      429        37.79
Ya Zhang      3      1340       91.72
Rui Zhang     4      1107       145.40
Yanfeng Wang  5      34         6.95
Qi Tian       6      236        27.45