Abstract
---
Neural networks are vulnerable to adversarial examples, i.e. inputs that are imperceptibly perturbed from natural data and yet incorrectly classified by the network. Adversarial training, a heuristic form of robust optimization that alternates between minimization and maximization steps, has proven to be among the most successful methods to train networks that are robust against a pre-defined family of perturbations. This paper provides a partial answer to the success of adversarial training. When the inner maximization problem can be solved to optimality, we prove that adversarial training finds a network of small robust train loss. When the maximization problem is solved by a heuristic algorithm, we prove that adversarial training finds a network of small robust surrogate train loss. The analysis technique leverages recent work on the analysis of neural networks via the Neural Tangent Kernel (NTK), combined with online learning when the maximization is solved by a heuristic, and the expressiveness of the NTK kernel in the $\ell_\infty$-norm.
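The abstract describes adversarial training as alternating an outer minimization over network weights with an inner maximization over a pre-defined perturbation set, where the inner problem may only be solved heuristically. Below is a minimal sketch of that loop in PyTorch, using projected gradient descent (PGD) as the heuristic inner maximizer for $\ell_\infty$-bounded perturbations; the function names, model, data loader, and hyperparameters are illustrative placeholders and do not reflect the paper's specific setup.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Heuristic inner maximization: PGD over the ell_inf ball of radius eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step on the loss
            delta.clamp_(-eps, eps)             # project back onto the ell_inf ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training(model, loader, epochs=10, lr=0.1, eps=0.1):
    """Outer minimization: SGD on the robust (surrogate) training loss."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            delta = pgd_attack(model, x, y, eps=eps)         # maximization step
            opt.zero_grad()
            F.cross_entropy(model(x + delta), y).backward()  # loss on perturbed inputs
            opt.step()                                       # minimization step
```

In the paper's terms, exactly solving the inner maximization yields the guarantee on the robust train loss, while a heuristic maximizer such as the PGD step above yields the guarantee on the robust surrogate train loss.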
Year | Venue | DocType
---|---|---
2019 | CoRR | Journal

Volume | Citations | PageRank
---|---|---
abs/1906.07916 | 0 | 0.34

References | Authors
---|---
0 | 6
Name | Order | Citations | PageRank |
---|---|---|---
Ruiqi Gao | 1 | 21 | 9.35 |
Tianle Cai | 2 | 7 | 3.81 |
Haochuan Li | 3 | 31 | 1.71 |
Liwei Wang | 4 | 1272 | 88.14 |
Cho-Jui Hsieh | 5 | 5034 | 291.05 |
Jason D. Lee | 6 | 711 | 48.29