Title
Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.
Abstract
Recently, it has been shown that deep neural networks (DNNs) are subject to attacks through adversarial samples. Adversarial samples are often crafted through adversarial perturbation, i.e., manipulating the original sample with minor modifications so that the DNN model labels the sample incorrectly. Given that it is almost impossible to train a perfect DNN, adversarial samples have been shown to be easy to generate. As DNNs are increasingly used in safety-critical systems such as autonomous cars, it is crucial to develop techniques for defending against such attacks. Existing defense mechanisms, which aim to make adversarial perturbation challenging, have been shown to be ineffective. In this work, we propose an alternative approach. We first observe that adversarial samples are much more sensitive to perturbations than normal samples. That is, if we impose random perturbations on a normal sample and an adversarial sample respectively, there is a significant difference between the ratios of label changes caused by the perturbations. Based on this observation, we design a statistical adversary detection algorithm called nMutant (inspired by mutation testing from the software engineering community). Our experiments show that nMutant effectively detects most of the adversarial samples generated by recently proposed attacking methods. Furthermore, we provide an error bound with certain statistical significance along with the detection.
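The following is a minimal sketch of the sensitivity idea described in the abstract: randomly perturb a sample many times and measure how often the predicted label changes. The names `model`, `epsilon`, `n_mutants`, and `threshold` are illustrative assumptions, not from the paper, and the sketch omits the statistical test that nMutant uses to provide an error bound.

```python
# Sketch only: illustrates the label-change-ratio heuristic from the abstract.
# `model` is assumed to be any callable mapping an input array to a class label;
# `epsilon` (perturbation size) and `threshold` (detection cut-off) are
# hypothetical parameters that would need calibration on normal samples.
import numpy as np

def label_change_ratio(model, x, n_mutants=100, epsilon=0.05, rng=None):
    """Apply n_mutants random perturbations to x and return the fraction of
    perturbed copies whose predicted label differs from the label of x."""
    rng = np.random.default_rng() if rng is None else rng
    original_label = model(x)
    changes = 0
    for _ in range(n_mutants):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        mutant = np.clip(x + noise, 0.0, 1.0)  # keep inputs in a valid range
        if model(mutant) != original_label:
            changes += 1
    return changes / n_mutants

def looks_adversarial(model, x, threshold=0.05, **kwargs):
    """Flag x as likely adversarial when its sensitivity to random
    perturbations exceeds the calibrated threshold."""
    return label_change_ratio(model, x, **kwargs) > threshold
```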
Year
2018
Venue
arXiv: Learning
Field
Artificial intelligence, Adversary, Machine learning, Deep neural networks, Mathematics, Adversarial system
DocType
Journal
Volume
abs/1805.05010
Citations
4
PageRank
0.41
References
23
Authors
4
Name          Order  Citations  PageRank
Jingyi Wang   1      72         16.19
Jun Sun       2      1407       120.35
Peixin Zhang  3      44         4.25
Xinyu Wang    4      590        30.19