Title
Defense-Net - Defend Against a Wide Range of Adversarial Attacks through Adversarial Detector.
Abstract
Recent studies have demonstrated that Deep Neural Networks (DNNs) are vulnerable to adversarial input perturbations: meticulously engineered slight perturbations can cause valid images to be misclassified. Adversarial training has been one of the most successful defense approaches in recent times. In this work, we propose an alternative to adversarial training: instead of retraining the original classifier, we train a separate model on adversarial examples. We train an adversarial detector network, called 'Defense-Net', with a strong adversary, while the original classifier is trained only on clean data. We propose a new adversarial cross-entropy loss function to train Defense-Net to appropriately differentiate between different adversarial examples. Defense-Net addresses three major concerns in developing a successful adversarial defense. First, unlike traditional adversarial-training-based defenses, our defense does not degrade clean-data accuracy. Second, we demonstrate its robustness with experiments on the MNIST and CIFAR-10 data sets, showing that the state-of-the-art accuracy under the most powerful known white-box attack increases from 94.02% to 99.2% on MNIST, and from 47% to 94.79% on CIFAR-10. Finally, unlike most recent defenses, our approach does not suffer from obfuscated gradients and can successfully defend against strong BPDA, PGD, FGSM, and C&W attacks.
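The core idea in the abstract, training the classifier on clean data only while a separate detector learns to flag adversarial inputs, can be sketched with a toy NumPy example. Everything below is illustrative and hypothetical: a logistic-regression "classifier" on synthetic clusters, a one-step FGSM attack, and a stand-in detector that thresholds the classifier's margin. This is not the paper's Defense-Net architecture or its adversarial cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy clean data: two tight Gaussian clusters in 20 dimensions.
d, n = 20, 500
X = np.vstack([rng.normal(-1.0, 0.1, (n, d)),
               rng.normal(+1.0, 0.1, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

# "Classifier": logistic regression trained on CLEAN data only,
# mirroring the Defense-Net setup where the classifier never sees
# adversarial examples.
w, b = np.zeros(d), 0.0
for _ in range(300):
    g = sigmoid(X @ w + b) - y          # dL/dz for logistic loss
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

# One-step FGSM attack: x_adv = x + eps * sign(dL/dx).
eps = 1.2
grad_x = np.outer(sigmoid(X @ w + b) - y, w)   # input gradient of the loss
X_adv = X + eps * np.sign(grad_x)

# Stand-in detector: 1-D logistic regression on the classifier's
# margin |x.w + b| (clean = label 0, adversarial = label 1). FGSM
# shrinks the margin, so the two populations separate. (The paper's
# detector is a full network on the input, not this margin feature.)
feat = np.abs(np.vstack([X, X_adv]) @ w + b)[:, None]
feat = (feat - feat.mean()) / feat.std()        # normalize for stable GD
yd = np.hstack([np.zeros(len(X)), np.ones(len(X_adv))])
wd, bd = np.zeros(1), 0.0
for _ in range(2000):
    gd = sigmoid(feat @ wd + bd) - yd
    wd -= 0.5 * (feat.T @ gd) / len(yd)
    bd -= 0.5 * gd.mean()

clean_acc = (((X @ w + b) > 0) == y).mean()
adv_acc = (((X_adv @ w + b) > 0) == y).mean()
det_acc = ((sigmoid(feat @ wd + bd) > 0.5) == yd).mean()
print(f"clean acc {clean_acc:.2f}, adv acc {adv_acc:.2f}, "
      f"detector acc {det_acc:.2f}")
```

On this toy problem the attack drives the classifier's adversarial accuracy toward zero while the detector separates clean from perturbed inputs almost perfectly, which illustrates why detection can avoid the clean-accuracy degradation of adversarial training: the classifier's weights are never altered.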
Year
2019
DOI
10.1109/ISVLSI.2019.00067
Venue
IEEE Computer Society Annual Symposium on VLSI
Field
Cross entropy, MNIST database, Computer science, Robustness (computer science), Artificial intelligence, Adversary, Obfuscation, Classifier (linguistics), Artificial neural network, Machine learning, Adversarial system
DocType
Conference
ISSN
2159-3469
Citations
1
PageRank
0.35
References
0
Authors
2
Name	Order	Citations	PageRank
Adnan Siraj Rakin	1	30	7.89
Deliang Fan	2	3755	3.66