Title
Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction
Abstract
Recently, many studies have demonstrated that deep neural network (DNN) classifiers can be fooled by adversarial examples, which are crafted by introducing small perturbations into an original sample. Accordingly, a number of powerful defense techniques have been proposed. However, existing defense techniques often require modifying the target model or rely on prior knowledge of attacks. In this paper, we propose a straightforward method for detecting adversarial image examples that can be directly deployed on unmodified off-the-shelf DNN models. We treat the perturbation introduced into an image as a kind of noise and employ two classic image-processing techniques, scalar quantization and the smoothing spatial filter, to reduce its effect. Image entropy is used as a metric to implement adaptive noise reduction for different kinds of images. Consequently, an adversarial example can be effectively detected by comparing the classification results of a given sample and its denoised version, without referring to any prior knowledge of attacks. More than 20,000 adversarial examples, crafted with different attack techniques against several state-of-the-art DNN models, are used to evaluate the proposed method. The experiments show that our detection method achieves a high overall F1 score of 96.39 percent and certainly raises the bar for defense-aware attacks.
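The detection pipeline described in the abstract can be summarized in a short sketch: denoise the input with entropy-adaptive scalar quantization and spatial smoothing, then flag the input if the predicted label changes. The following is a minimal illustration, not the authors' implementation; the entropy thresholds, quantization intervals, filter size, and the classify callable are hypothetical placeholders introduced here only for illustration.

import numpy as np
from scipy.ndimage import uniform_filter


def image_entropy(img):
    """Shannon entropy of the grayscale intensity histogram."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def scalar_quantize(img, interval):
    """Map each pixel to the midpoint of its quantization interval."""
    return (img // interval) * interval + interval // 2


def denoise(img):
    """Entropy-adaptive noise reduction: stronger quantization, plus an
    extra smoothing spatial filter, for higher-entropy images.
    Thresholds and intervals below are illustrative assumptions."""
    h = image_entropy(img)
    if h < 4.0:                       # low entropy: mild quantization only
        return scalar_quantize(img, 32)
    if h < 5.0:                       # medium entropy: coarser quantization
        return scalar_quantize(img, 64)
    size = (3, 3, 1) if img.ndim == 3 else 3   # smooth spatial dims only
    smoothed = uniform_filter(img.astype(float), size=size)
    return scalar_quantize(smoothed.astype(np.uint8), 64)


def is_adversarial(img, classify):
    """Flag the input if its label changes after denoising.
    `classify` is any callable mapping an image to a class label."""
    return classify(img) != classify(denoise(img))

In practice, classify would wrap the target DNN's forward pass (for example, an argmax over its softmax output), and the entropy thresholds and quantization intervals would be tuned on benign data rather than fixed as above.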
Year
2021
DOI
10.1109/TDSC.2018.2874243
Venue
IEEE Transactions on Dependable and Secure Computing
Keywords
Adversarial example, deep neural network, detection
DocType
Journal
Volume
18
Issue
1
ISSN
1545-5971
Citations
4
PageRank
0.40
References
0
Authors
6
Name             Order   Citations   PageRank
Liang Bin        1       239         54.58
Hongcheng Li     2       29          3.76
Miaoqiang Su     3       31          2.65
Xirong Li        4       1191        68.62
Wenchang Shi     5       198         24.17
Xiaofeng Wang    6       2543        161.68