Title
HyperNetworks with statistical filtering for defending adversarial examples.
Abstract
Deep learning algorithms are known to be vulnerable to adversarial perturbations in various tasks such as image classification. This problem has been addressed by employing defense methods that detect and reject particular types of attacks. However, training and manipulating networks according to particular defense schemes increases the computational complexity of the learning algorithms. In this work, we propose a simple yet effective method to improve the robustness of convolutional neural networks (CNNs) to adversarial attacks by using data-dependent adaptive convolution kernels. To this end, we propose a new type of HyperNetwork that employs statistical properties of input data and features to compute statistical adaptive maps. We then filter the convolution weights of CNNs with the learned statistical maps to compute dynamic kernels. Thereby, weights and kernels are collectively optimized to learn image classification models that are robust to adversarial attacks, without employing additional detection and rejection algorithms. We empirically demonstrate that the proposed method enables CNNs to spontaneously defend against different types of attacks, e.g. attacks generated by Gaussian noise, fast gradient sign methods (Goodfellow et al., 2014), and a black-box attack (Narodytska & Kasiviswanathan, 2016).
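The abstract's core mechanism (a hypernetwork maps input statistics to an adaptive map that filters static convolution weights into data-dependent kernels) can be illustrated with a minimal NumPy sketch. This is not the authors' exact architecture: the hypernetwork here is assumed to be a single affine layer with a sigmoid, the statistics are per-channel mean and standard deviation, and all names and shapes are illustrative.

```python
import numpy as np

def statistical_map(x, w_hyper, b_hyper):
    """Hypothetical hypernetwork: input statistics -> per-output-channel gates.

    x: input feature map of shape (C_in, H, W).
    Returns a vector of shape (C_out,) with entries in (0, 1).
    """
    # Per-channel mean and std as the input statistics, shape (2 * C_in,).
    stats = np.concatenate([x.mean(axis=(1, 2)), x.std(axis=(1, 2))])
    # One affine layer followed by a sigmoid (an assumed, minimal hypernetwork).
    return 1.0 / (1.0 + np.exp(-(w_hyper @ stats + b_hyper)))

def dynamic_conv(x, base_kernels, w_hyper, b_hyper):
    """Filter static weights with the statistical map, then convolve (valid mode).

    base_kernels: static weights of shape (C_out, C_in, k, k), shared across inputs.
    """
    gates = statistical_map(x, w_hyper, b_hyper)           # (C_out,)
    # Data-dependent kernels: each output channel's weights are scaled by its gate.
    kernels = base_kernels * gates[:, None, None, None]
    c_out, _, k, _ = kernels.shape
    h, w = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, h, w))
    for o in range(c_out):                                 # naive valid convolution
        for i in range(h):
            for j in range(w):
                out[o, i, j] = np.sum(kernels[o] * x[:, i:i + k, j:j + k])
    return out
```

Because the gates depend on the statistics of each input, the effective kernels change per example while the underlying weights remain a single set of trainable parameters, which is what allows weights and kernels to be optimized jointly.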
Year
2017
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Computer science, Convolutional neural network, Convolution, Filter (signal processing), Robustness (computer science), Artificial intelligence, Deep learning, Contextual image classification, Gaussian noise, Machine learning, Computational complexity theory
DocType
Volume
abs/1711.01791
Citations
5
Journal
PageRank
0.38
References
10
Authors
3
Name               Order  Citations  PageRank
Zhun Sun           1      12         3.49
Mete Ozay          2      106        14.50
Takayuki Okatani   3      492        50.10