Title
Research Progress and Challenges on Application-Driven Adversarial Examples: A Survey
Abstract
Great progress has been made in deep learning over the past few years, driving the deployment of deep learning–based applications in cyber-physical systems. However, the lack of interpretability of deep learning models leaves potential security holes. Recent research has found that deep neural networks are vulnerable to well-designed input examples, called adversarial examples. The perturbations in such examples are often too small to perceive, yet they completely fool deep learning models. In practice, adversarial attacks pose a serious threat to the success of deep learning. As deep learning applications continue to develop, adversarial examples in different fields have also received attention. In this article, we summarize the methods of generating adversarial examples in computer vision, speech recognition, and natural language processing, and we study the applications of adversarial examples. We also explore emerging research directions and open problems.
Year
2021
DOI
10.1145/3470493
Venue
ACM Transactions on Cyber-Physical Systems
Keywords
Adversarial examples, adversarial attacks, application, computer vision, speech recognition, natural language processing
DocType
Journal
Volume
5
Issue
4
ISSN
2378-962X
Citations
1
PageRank
0.35
References
0
Authors
5
Name	Order	Citations	PageRank
Wei Jiang	1	3	4.13
Zhiyuan He	2	2	1.73
Jinyu Zhan	3	3	8.15
Weijia Pan	4	1	0.35
Deepak Adhikari	5	1	0.35