Title
Adversarial Vulnerability of Neural Networks Increases With Input Dimension.
Abstract
Over the past four years, neural networks have proven vulnerable to adversarial images: targeted but imperceptible image perturbations lead to drastically different predictions. We show that adversarial vulnerability increases with the gradients of the training objective when seen as a function of the inputs. For most current network architectures, we prove that the $\ell_1$-norm of these gradients grows as the square root of the input size. These nets therefore become increasingly vulnerable with growing image size. Over the course of our analysis we rediscover and generalize double-backpropagation, a technique that penalizes large gradients in the loss surface to reduce adversarial vulnerability and increase generalization performance. We show that this regularization scheme is equivalent at first order to training with adversarial noise. Finally, we demonstrate that replacing strided layers with average-pooling layers decreases adversarial vulnerability. Our proofs rely on the network's weight distribution at initialization, but extensive experiments confirm their conclusions after training.
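The gradient penalty mentioned in the abstract (double-backpropagation) can be sketched concretely. At first order, $L(x+\delta) \approx L(x) + \delta^\top \partial_x L(x)$, and the worst case over $\|\delta\|_\infty \le \epsilon$ adds $\epsilon \|\partial_x L(x)\|_1$, which is why penalizing the $\ell_1$-norm of the input gradient approximates training with adversarial noise. Below is a minimal, illustrative PyTorch-style sketch of such a penalty; it is not the authors' code, and the names `model`, `x`, `y`, and `epsilon` are assumptions made for the example.

```python
# Illustrative sketch (not the authors' implementation) of an input-gradient
# penalty in the spirit of double-backpropagation.
import torch
import torch.nn.functional as F

def double_backprop_loss(model, x, y, epsilon=0.03):
    """Cross-entropy loss plus epsilon * ||d loss / d x||_1.

    The l_1 penalty on the input gradient approximates, to first order, the
    worst-case loss increase under an l_inf perturbation of size epsilon.
    """
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True keeps the graph so the penalty itself can be
    # backpropagated through ("double backpropagation").
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x.abs().sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return loss + epsilon * penalty
```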
Year
2018
Venue
arXiv: Machine Learning
Field
Network architecture, Theoretical computer science, Mathematical proof, Artificial intelligence, Initialization, Square root, Artificial neural network, Image resolution, Mathematics, Machine learning, Vulnerability, Adversarial system
DocType
Journal
Volume
abs/1802.01421
Citations
8
PageRank
0.50
References
9
Authors
5
Name                        Order  Citations  PageRank
Carl-Johann Simon-Gabriel   1      37         3.61
Yann Ollivier               2      61         4.35
Bernhard Schölkopf          3      23120      3091.82
Léon Bottou                 4      11754      1364.56
David Lopez-Paz             5      256        19.06