Title
AdvCGAN: An Elastic and Covert Adversarial Examples Generating Framework
Abstract
Recently, a new methodology using generative adversarial networks (GANs) has been proposed to produce adversarial examples, breaking the limitations of previous methods that depend on different norm levels. Once the generator is trained, it can efficiently generate perturbations for any instance, since it learns to approximate the distribution of real instances. However, this category of GAN-based method still has two shortcomings: i) the predicted label in the attacking stage depends entirely on a fixed or randomly chosen label in the training stage, so these methods cannot elastically produce an adversarial example with an arbitrarily assigned label in the targeted attack scenario once the generator has finished training; and ii) they only consider whether the produced adversarial example is close to the real instances, which cannot guarantee that the generated adversarial example is visually indistinguishable from its corresponding original instance. These two disadvantages make this kind of method lack flexibility and covertness. To circumvent these two predicaments, in this paper we propose a simple and easy-to-use adversarial example generating framework, AdvCGAN, which trains a conditional generative adversarial network by jointly considering the similarity in data distribution and in image labels between the adversarial examples and the original instances, so that the generated examples remain imperceptible to humans. Concretely, our proposed AdvCGAN trains the conditional GAN with both image data and label (normal and attack) information, by which the generator can utilize the guidance of label information to produce an adversarial example with any specific label in the attacking stage. Extensive experiments on the commonly used MNIST and CIFAR-10 datasets show that our proposed AdvCGAN significantly outperforms other methods under multi-faceted evaluation.
The results show that AdvCGAN can elastically produce more realistic adversarial examples with any arbitrarily assigned attack label and achieves higher attack accuracy, especially in targeted attacks.
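The core idea described above, conditioning the generator on a target-label encoding so the same trained network can be steered to any attack class, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the network shape, the perturbation budget `EPS`, and the (untrained) random weights are all hypothetical, and the label is simply one-hot encoded and concatenated with the flattened image.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10       # MNIST and CIFAR-10 both have 10 classes
IMG_DIM = 28 * 28      # flattened MNIST-sized image (assumption)
HIDDEN = 64            # hypothetical hidden width
EPS = 0.3              # hypothetical perturbation budget for covertness

# Hypothetical generator weights mapping [image ; one-hot label] -> perturbation.
# In the real framework these would be learned adversarially against a discriminator.
W1 = rng.normal(0.0, 0.01, (IMG_DIM + NUM_CLASSES, HIDDEN))
W2 = rng.normal(0.0, 0.01, (HIDDEN, IMG_DIM))

def generator(x, target_label):
    """Produce an adversarial example for image x steered toward target_label.

    Because the target label is an input, any class can be assigned at
    attack time without retraining the generator.
    """
    onehot = np.zeros(NUM_CLASSES)
    onehot[target_label] = 1.0
    z = np.concatenate([x, onehot])           # condition generator on the label
    h = np.maximum(z @ W1, 0.0)               # ReLU hidden layer
    perturbation = EPS * np.tanh(h @ W2)      # tanh bounds perturbation to [-EPS, EPS]
    return np.clip(x + perturbation, 0.0, 1.0)

x = rng.random(IMG_DIM)                       # stand-in for a real image in [0, 1]
x_adv = generator(x, target_label=3)          # elastically choose any target class
```

Bounding the perturbation with `tanh` and clipping to the valid pixel range keeps the adversarial example close to the original instance, which is the covertness requirement the abstract emphasizes.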
Year: 2021
DOI: 10.1109/IJCNN52387.2021.9533901
Venue: 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
DocType: Conference
ISSN: 2161-4393
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name            Order  Citations  PageRank
Baoli Wang      1      20         3.63
Xinxin Fan      2      16         5.10
Quanliang Jing  3      2          2.45
Haining Tan     4      0          1.35
Jingping Bi     5      70         18.36