Title
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization
Abstract
We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both generalization and robustness to adversarial attacks. On the CIFAR10 benchmark, we raise the accuracy of Projected Gradient Descent adversarial training from $\sim 46\%$ to $\sim 69\%$ for ResNet20. When we combine this data-dependent activation with total variation minimization on adversarial images and training data augmentation, we improve the accuracy of ResNet56 by 38.9\% under the strongest Iterative Fast Gradient Sign Method attack. We further provide an intuitive explanation of our defense by analyzing the geometry of the feature space. For reproducibility, the code is made available at this https URL.
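As a rough illustration of the total variation minimization step described above (smoothing adversarial images before they are fed to the classifier), the following is a minimal NumPy sketch of gradient descent on a smoothed TV objective. The function name tv_denoise, the parameters lam, step, and iters, and the toy example are illustrative assumptions, not the authors' released code or exact solver.

```python
import numpy as np


def tv_denoise(x, lam=0.1, step=0.1, iters=100):
    """Sketch of total variation minimization by gradient descent:
    minimize 0.5 * ||u - x||^2 + lam * sum sqrt(|grad u|^2 + eps),
    where x is a 2-D grayscale image with values in [0, 1].
    """
    u = x.astype(float)
    eps = 1e-6  # smoothing constant so the TV gradient stays well defined
    for _ in range(iters):
        # forward differences with replicate (Neumann) boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # discrete divergence of (px, py): backward differences, zero at the boundary
        px_shift = np.zeros_like(px)
        px_shift[:, 1:] = px[:, :-1]
        py_shift = np.zeros_like(py)
        py_shift[1:, :] = py[:-1, :]
        div = (px - px_shift) + (py - py_shift)
        # gradient of the objective and a plain descent step
        u -= step * ((u - x) - lam * div)
    return np.clip(u, 0.0, 1.0)


if __name__ == "__main__":
    # toy example: a square "image" with a small perturbation added
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32))
    clean[8:24, 8:24] = 1.0
    perturbed = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0.0, 1.0)
    recovered = tv_denoise(perturbed, lam=0.15)
    print("mean |perturbed - clean|:", np.abs(perturbed - clean).mean())
    print("mean |recovered - clean|:", np.abs(recovered - clean).mean())
```

In the defense summarized in the abstract, such a smoothed image would replace the raw adversarial input at test time; the interpolating output activation and the training data augmentation are separate components not sketched here.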
Year
2018
Venue
arXiv: Learning
Field
Feature vector, Mathematical optimization, Gradient descent, Activation function, Interpolation, Algorithm, Robustness (computer science), Total variation minimization, Artificial neural network, Mathematics, Adversarial system
DocType
Journal
Volume
abs/1809.08516
Citations
2
PageRank
0.41
References
2
Authors
7
Name | Order | Citations | PageRank
Bao Wang | 1 | 35 | 8.19
Alex Lin | 2 | 6 | 1.18
Zuoqiang Shi | 3 | 121 | 18.35
Wei Zhu | 4 | 48 | 12.13
Penghang Yin | 5 | 60 | 9.03
Andrea L. Bertozzi | 6 | 486 | 61.55
Stanley Osher | 7 | 7973 | 514.62