Title: Assured Deep Learning: Practical Defense Against Adversarial Attacks
Abstract
Deep learning (DL) models have been shown to be vulnerable to adversarial attacks. In light of these attacks, it is critical to reliably quantify the confidence of a neural network's predictions to enable the safe adoption of DL models in autonomous, safety-sensitive tasks (e.g., unmanned vehicles and drones). This article discusses recent research advances in unsupervised model assurance against the strongest adversarial attacks known to date and quantitatively compares their performance. Given the widespread usage of DL models, it is imperative to provide model assurance by carefully examining the feature maps automatically learned within DL models, instead of looking back with regret when deep learning systems are compromised by adversaries.
Year: 2018
DOI: 10.1145/3240765.3274525
Venue: 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
Keywords: Adversarial Deep Learning, Unsupervised Model Assurance, Real-time Defense, Reconfigurable Computing
Field: Regret, Computer security, Computer science, Real-time computing, Drone, Artificial intelligence, Deep learning, Artificial neural network, Adversarial system, Reconfigurable computing
DocType: Conference
ISSN: 1933-7760
ISBN: 978-1-5386-7502-1
Citations: 0
PageRank: 0.34
References: 7
Authors: 5
Name                    Order  Citations  PageRank
Bita Darvish Rouhani    1      99         13.53
Mohammad Samragh        2      38         7.01
Mojan Javaheripi        3      18         5.83
Tara Javidi             4      806        78.83
Farinaz Koushanfar      5      3055       268.84