Title
Adversarial Attack? Don't Panic
Abstract
Deep learning plays an increasingly important role in daily life and scientific research, powering applications such as autonomous systems, intelligent living, and data mining. However, numerous studies have shown that deep learning models, despite their superior performance on many tasks, are vulnerable to subtle perturbations deliberately constructed by attackers. These adversarial perturbations are imperceptible to human observers yet can completely fool deep neural network models. The emergence of adversarial attacks has raised questions about the reliability of neural networks, making machine learning security and privacy an increasingly active research area. In this paper, we summarize the prevalent methods for generating adversarial attacks, organized into three groups, and elaborate on the ideas and principles behind their generation. We further analyze the common limitations of these methods and conduct statistical experiments on the last-layer outputs using CleverHans, revealing that the detection of adversarial samples is not as difficult as it seems and can be achieved in relatively simple ways.
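The abstract does not spell out the detection procedure, but the idea of flagging adversarial samples from simple statistics of the last-layer outputs can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' method: it uses a hypothetical maximum-softmax-probability statistic with a quantile threshold fit on clean data, and all function names (`softmax`, `max_prob`, `fit_threshold`, `flag_adversarial`) and the 5% quantile rule are invented for the example.

```python
# Illustrative sketch only: a simple last-layer statistic for flagging
# suspect inputs. The statistic (max softmax probability) and the
# quantile-threshold rule are assumptions, not the paper's procedure.
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_prob(logits):
    """Statistic: maximum softmax probability per example."""
    return softmax(logits).max(axis=-1)

def fit_threshold(clean_logits, quantile=0.05):
    """Fit a threshold on clean data only: anything whose confidence
    falls below the q-th quantile seen on clean inputs is suspect."""
    return np.quantile(max_prob(clean_logits), quantile)

def flag_adversarial(logits, threshold):
    """Boolean mask: True where an input's last-layer statistic
    looks unlike the clean distribution."""
    return max_prob(logits) < threshold
```

Note that many strong attacks produce overconfident rather than underconfident predictions, so a practical detector might compare richer last-layer statistics (e.g., both tails of the confidence distribution) between clean and suspect batches; the sketch is only meant to convey how "relatively simple" a last-layer detector can be.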
Year
2018
DOI
10.1109/BIGCOM.2018.00021
Venue
2018 4th International Conference on Big Data Computing and Communications (BIGCOM)
Keywords
deep learning, adversarial attacks, adversarial generation algorithms, easy detection
Field
Panic, Iterative method, Computer science, Artificial intelligence, Autonomous system (Internet), Deep learning, Statistical classification, Artificial neural network, Machine learning, Scientific method, Adversarial system
DocType
Conference
ISBN
978-1-5386-8022-3
Citations
0
PageRank
0.34
References
0
Authors
3
Name          Order  Citations  PageRank
Feixia Min    1      0          0.34
Xiaofeng Qiu  2      0          1.69
Fan Wu        3      7          7.88