Title |
---|
Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization |
Abstract |
---|
We improve the robustness of deep neural nets to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both generalization and robustness against adversarial attacks. On the CIFAR10 benchmark, it raises the accuracy of Projected Gradient Descent adversarial training from $\sim 46\%$ to $\sim 69\%$ for ResNet20. When we combine this data-dependent activation with total variation minimization on adversarial images and training-data augmentation, we achieve a 38.9% improvement in accuracy for ResNet56 under the strongest Iterative Fast Gradient Sign Method attack. We further provide an intuitive explanation of our defense by analyzing the geometry of the feature space. For reproducibility, the code is made available at this https URL. |
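The abstract describes preprocessing adversarial images with total variation (TV) minimization before classification. As an illustrative sketch only (the paper's exact TV formulation and solver are not specified here), a smoothed-TV denoiser that minimizes $\frac{1}{2}\|u - f\|^2 + \lambda\,\mathrm{TV}(u)$ by gradient descent might look like the following; the function name `tv_denoise` and the parameters `lam`, `step`, and `n_iter` are assumptions for the sketch:

```python
import numpy as np

def tv_denoise(img, lam=0.1, step=0.2, n_iter=100):
    """Gradient descent on 0.5*||u - img||^2 + lam * TV_eps(u),
    where TV_eps is a smoothed (differentiable) total variation.
    Illustrative sketch only; not the paper's implementation."""
    u = img.astype(float).copy()
    eps = 1e-8  # smoothing constant to avoid division by zero
    for _ in range(n_iter):
        # forward differences (zero-padded at the far boundary)
        dx = np.diff(u, axis=1, append=u[:, -1:])
        dy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        # approximate divergence of (px, py) via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # gradient of the objective: data-fidelity term minus lam * div
        grad = (u - img) - lam * div
        u -= step * grad
    return u
```

Applied to an adversarially perturbed image, the denoised output `tv_denoise(x_adv)` would then be fed to the classifier in place of `x_adv`; the perturbation's high-frequency component is suppressed while edges are largely preserved.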
Year | Venue | Field |
---|---|---|
2018 | arXiv: Learning | Feature vector, Mathematical optimization, Gradient descent, Activation function, Interpolation, Algorithm, Robustness (computer science), Total variation minimization, Artificial neural network, Mathematics, Adversarial system
DocType | Volume | Citations
---|---|---|
Journal | abs/1809.08516 | 2
PageRank | References | Authors
---|---|---|
0.41 | 2 | 7
Name | Order | Citations | PageRank |
---|---|---|---|
Bao Wang | 1 | 35 | 8.19
Alex Lin | 2 | 6 | 1.18 |
Zuoqiang Shi | 3 | 121 | 18.35 |
Wei Zhu | 4 | 48 | 12.13 |
Penghang Yin | 5 | 60 | 9.03 |
Andrea L. Bertozzi | 6 | 486 | 61.55 |
Stanley Osher | 7 | 7973 | 514.62 |