Title
DeepMask: Masking DNN Models for Robustness Against Adversarial Samples
Abstract
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial samples: maliciously perturbed inputs crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It has been observed that an adversary can easily generate adversarial samples by applying small perturbations to irrelevant feature dimensions that are unnecessary for the classification task at hand. To overcome this problem, we introduce a defensive mechanism called DeepMask. By identifying and removing unnecessary features from a DNN model, DeepMask limits the capacity an attacker can exploit to generate adversarial samples and therefore increases robustness against such inputs. Compared with other defensive approaches, DeepMask is easy to implement and computationally efficient. Experimental results show that DeepMask improves the performance of state-of-the-art DNN models against adversarial samples.
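The abstract's core idea, masking off feature dimensions that are unnecessary for the classification task so an attacker has less room to perturb, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's actual DeepMask procedure: the variance-based importance score, the `keep_ratio` parameter, and all function names here are hypothetical stand-ins.

```python
import numpy as np


def feature_mask(X, keep_ratio=0.5):
    """Return a 0/1 mask keeping only the highest-scoring feature dimensions.

    Uses per-dimension variance as a stand-in importance score (an
    illustrative assumption, not the paper's criterion).
    """
    scores = X.var(axis=0)                    # per-dimension importance proxy
    k = max(1, int(keep_ratio * X.shape[1]))  # number of dimensions to keep
    keep = np.argsort(scores)[-k:]            # indices of the top-k dimensions
    mask = np.zeros(X.shape[1])
    mask[keep] = 1.0
    return mask


def apply_mask(x, mask):
    """Zero out masked-off dimensions of an input before it reaches the DNN."""
    return x * mask


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    X[:, 3] *= 10.0  # make one dimension clearly "important"
    mask = feature_mask(X, keep_ratio=0.25)
    print(int(mask.sum()), mask[3])
```

Any perturbation an attacker places on a masked-off dimension is discarded by `apply_mask`, which is the intuition behind limiting the attacker's usable capacity.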
Year
2017
Venue
arXiv: Learning
Field
Masking (art), Computer science, Robustness (computer science), Artificial intelligence, Machine learning, Adversarial system
DocType
Journal
Volume
abs/1702.06763
Citations
1
PageRank
0.37
References
0
Authors
3
Name          Order  Citations+PageRank (fused in source)
Ji Gao        1      198.29
Beilun Wang   2      12.40
Qi, Yanjun    3      68445.77