Title
Generalizable Adversarial Training via Spectral Normalization.
Abstract
Deep neural networks (DNNs) have set benchmarks on a wide array of supervised learning tasks. Trained DNNs, however, often lack robustness to minor adversarial perturbations of the input, which undermines their practicality. Recent works have increased the robustness of DNNs by fitting networks using adversarially-perturbed training samples, but the improved performance can still be far below the performance seen in non-adversarial settings. A significant portion of this gap can be attributed to the decrease in generalization performance due to adversarial training. In this work, we extend the notion of margin loss to adversarial settings and bound the generalization error for DNNs trained under several well-known gradient-based attack schemes, motivating an effective regularization scheme based on spectral normalization of the DNN's weight matrices. We also provide a computationally-efficient method for normalizing the spectral norm of convolutional layers with arbitrary stride and padding schemes in deep convolutional networks. We evaluate the power of spectral normalization extensively on combinations of datasets, network architectures, and adversarial training schemes. The code is available at this https URL.
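The abstract's central technical ingredient, normalizing the spectral norm of weight matrices (including convolutional layers with arbitrary stride and padding), is commonly implemented with power iteration on the layer's linear operator. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' released implementation; the function name conv_spectral_norm, the iteration count, and the usage at the end are assumptions made for this example.

import torch
import torch.nn.functional as F

def conv_spectral_norm(weight, input_shape, stride=1, padding=0, n_iters=50):
    # Power-iteration estimate of the largest singular value (spectral norm) of
    # the linear map x -> conv2d(x, weight, stride, padding).
    # weight: (C_out, C_in, kH, kW); input_shape: (C_in, H, W), no batch dim.
    c_in, h, w = input_shape
    k_h, k_w = weight.shape[2], weight.shape[3]
    # output_padding makes conv_transpose2d the exact adjoint of the strided conv
    out_pad = ((h + 2 * padding - k_h) % stride, (w + 2 * padding - k_w) % stride)
    x = torch.randn(1, c_in, h, w)
    for _ in range(n_iters):
        y = F.conv2d(x, weight, stride=stride, padding=padding)          # W x
        x = F.conv_transpose2d(y, weight, stride=stride, padding=padding,
                               output_padding=out_pad)                   # W^T y
        x = x / (x.norm() + 1e-12)                                       # renormalize
    # After convergence, ||W x|| approximates sigma_max since ||x|| == 1 here.
    return F.conv2d(x, weight, stride=stride, padding=padding).norm()

# Hypothetical usage: rescale a conv kernel so the layer's spectral norm is at most 1.
# sigma = conv_spectral_norm(conv.weight.data, (3, 32, 32), stride=1, padding=1)
# conv.weight.data /= sigma.clamp(min=1.0)

Using conv_transpose2d with the same kernel applies the adjoint of the convolution operator, so the power iteration accounts for stride and padding directly rather than approximating the layer by a dense matrix.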
Year: 2018
Venue: International Conference on Learning Representations
Field: Normalization (statistics), Network architecture, Supervised learning, Matrix norm, Robustness (computer science), Regularization (mathematics), Artificial intelligence, Padding, Machine learning, Mathematics, Adversarial system
DocType:
Volume: abs/1811.07457
Citations: 5
Journal:
PageRank: 0.40
References: 0
Authors: 3
Name            Order   Citations   PageRank
Farzan Farnia   1       53          6.45
Jesse Zhang     2       10          4.58
David Tse       3       10182       1122.05