Title
DeT: Defending Against Adversarial Examples via Decreasing Transferability
Abstract
Deep neural networks (DNNs) have made great progress in recent years. Unfortunately, DNNs have been found to be vulnerable to adversarial examples, i.e., inputs injected with elaborately crafted perturbations. In this paper, we propose a defense method named DeT, which can (1) defend against adversarial examples generated by common attacks, and (2) correctly label adversarial examples with both small and large perturbations. DeT is a transferability-based defense method, which to the best of our knowledge is the first such attempt. Our experimental results demonstrate that DeT works well under both black-box and gray-box attacks. We hope that DeT will serve as a benchmark in the research community for evaluating DNN attacks.
Year
2019
DOI
10.1007/978-3-030-37337-5_25
Venue
CYBERSPACE SAFETY AND SECURITY, PT I
Keywords
Deep learning, Adversarial examples, Transferability
DocType
Conference
Volume
11982
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
5
Name              Order  Citations  PageRank
Changjiang Li     1      0          0.34
Haiqin Weng       2      5          3.72
Shouling Ji       3      616        56.91
Tiberio Uricchio  4      151        15.93
Qinming He        5      371        41.53