Title
Black-box Adversarial Sample Generation Based on Differential Evolution
Abstract
Deep Neural Networks (DNNs) are used in a wide range of everyday tasks such as object detection, speech processing, and machine translation. However, DNNs are known to suffer from robustness problems: slightly perturbed inputs, called adversarial samples, can cause a DNN to misbehave. In this paper, we propose a black-box technique called Black-box Momentum Iterative Fast Gradient Sign Method (BMI-FGSM) to test the robustness of DNN models. The technique requires no knowledge of the structure or weights of the target DNN. Unlike existing white-box testing techniques, which need access to model internals such as gradients, our technique approximates gradients through Differential Evolution and uses the approximated gradients to construct adversarial samples. Experimental results show that our technique achieves a 100% success rate in generating adversarial samples that trigger misclassification, and an over 95% success rate in generating samples that trigger misclassification to a specific target output label. It also achieves smaller perturbation distances and better transferability. Compared to the state-of-the-art black-box technique, our technique is more efficient. Furthermore, we test the commercial Aliyun API and successfully trigger its misbehavior within a limited number of queries, demonstrating the feasibility of real-world black-box attacks.
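The abstract outlines the core idea: estimate gradient information with Differential Evolution (DE) using only model queries, then feed that estimate into momentum-iterative FGSM-style updates. The Python sketch below illustrates this idea under stated assumptions; it is not the authors' implementation, and the toy query function toy_model_confidence as well as all hyperparameters (pop_size, generations, step, f, cr, eps, alpha, mu) are illustrative placeholders.

import numpy as np

def toy_model_confidence(x, true_label=0):
    # Hypothetical stand-in for one black-box query: returns the model's
    # confidence in the true label. Replace with a real classifier or API call.
    w = np.linspace(-1.0, 1.0, x.size)
    p = 1.0 / (1.0 + np.exp(-float(w @ x.ravel())))
    return p if true_label == 0 else 1.0 - p

def de_gradient_sign(x, query, pop_size=20, generations=30, step=0.05,
                     f=0.5, cr=0.7, rng=None):
    # Approximate the sign of the loss gradient with Differential Evolution.
    # Each candidate is a sign vector s in {-1, +1}^d; its fitness measures how
    # much the probe x + step * s lowers the true-label confidence.
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    pop = rng.choice([-1.0, 1.0], size=(pop_size, d))

    def fitness(s):
        return -query(np.clip(x + step * s.reshape(x.shape), 0.0, 1.0))

    scores = np.array([fitness(s) for s in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.sign(a + f * (b - c))         # DE/rand/1 mutation
            mutant[mutant == 0] = 1.0                 # keep entries in {-1, +1}
            trial = np.where(rng.random(d) < cr, mutant, pop[i])  # crossover
            t_score = fitness(trial)
            if t_score > scores[i]:                   # greedy selection
                pop[i], scores[i] = trial, t_score
    return pop[np.argmax(scores)].reshape(x.shape)

def black_box_momentum_attack(x0, query, eps=0.3, alpha=0.03, iters=10, mu=0.9):
    # Momentum-iterative sign updates driven by the DE gradient estimate,
    # constrained to an L-infinity ball of radius eps around the input.
    x, g = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        g = mu * g + de_gradient_sign(x, query)       # accumulate momentum
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)                      # stay in valid input range
    return x

# Usage on the toy model: the true-label confidence should drop.
rng = np.random.default_rng(0)
x0 = rng.random(16)                                   # toy 16-dimensional "image"
query = lambda x: toy_model_confidence(x, true_label=0)
x_adv = black_box_momentum_attack(x0, query)
print(query(x0), "->", query(x_adv))

The paper's actual DE operators, query budget, and targeted-attack variant differ; the sketch only shows how a population search over sign vectors can substitute for true gradients in an FGSM-style momentum loop.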
Year
2020
DOI
10.1016/j.jss.2020.110767
Venue
Journal of Systems and Software
Keywords
Adversarial samples, Differential evolution, Black-box testing, Deep Neural Network
DocType
Journal
Volume
170
ISSN
0164-1212
Citations
3
PageRank
0.44
References
0
Authors
4
Name           Order  Citations  PageRank
Junyu Lin      1      8          3.26
Lei Xu         2      124        18.82
Yingqi Liu     3      3          0.44
Xiangyu Zhang  4      4          25.14