Title
Fake Gradient: A Security and Privacy Protection Framework for DNN-based Image Classification
Abstract
Deep neural networks (DNNs) have demonstrated phenomenal success in image classification applications and are widely adopted in multimedia Internet of Things (IoT) use cases, such as smart home systems. To compensate for the limited resources of IoT devices, computation-intensive image classification tasks are often offloaded to remote cloud services. However, offloading-based image classification poses significant security and privacy risks to both the user data and the DNN model, exposing them to effective adversarial attacks that compromise classification accuracy. Existing defense methods either impair the original functionality or incur high computation or model re-training overhead. In this paper, we develop a novel defense approach, namely Fake Gradient, that protects data privacy and defends against adversarial attacks by encrypting the model output. Fake Gradient hides the real output information by generating fake classes, and misleads adversarial perturbation generation that relies on gradient knowledge derived from the fake output, which helps maintain high classification accuracy on the perturbed data. Our evaluations on ImageNet with 7 popular DNN models indicate that Fake Gradient is effective in protecting privacy and defending against adversarial attacks targeting image classification applications.
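The abstract only sketches the mechanism, so the following toy illustration is an assumption-laden sketch rather than the authors' implementation: the linear "model" W, the decoy matrix W_fake, the secret permutation key, and the FGSM-style attacker are all hypothetical stand-ins. It illustrates the two properties the abstract claims: the served output hides the real class behind a fake one that only a key holder can decrypt, and a gradient derived from the served output points along a decoy direction, so the resulting perturbation tends to leave the true prediction intact.

```python
# Toy sketch of the "fake output / fake gradient" idea (illustrative only;
# not the paper's actual construction).
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 10, 64

W = rng.normal(size=(n_classes, dim))        # real classifier (toy, linear)
W_fake = rng.normal(size=(n_classes, dim))   # decoy directions -> fake gradients
key = rng.permutation(n_classes)             # secret class-encryption key
inv_key = np.argsort(key)                    # inverse permutation

def true_class(x):
    return int(np.argmax(W @ x))

def served_logits(x):
    """What an untrusted client sees: fake logits whose argmax encodes the
    real class under the secret key, and whose geometry (hence any gradient
    derived from them) comes from the decoy W_fake."""
    z = W_fake @ x
    c = true_class(x)
    z[key[c]] = z.max() + 1.0                # a fake class id wins
    return z

def decrypt_class(served):
    # Authorized users invert the secret permutation to recover the class.
    return int(inv_key[np.argmax(served)])

x = rng.normal(size=dim)
c = true_class(x)

# FGSM-style attacker guided by the served outputs: the top reported class
# is fake, and the gradient it recovers is a row of the decoy W_fake.
eps = 0.1
fake_top = int(np.argmax(served_logits(x)))
x_adv_fake = x - eps * np.sign(W_fake[fake_top])   # decoy direction
x_adv_real = x - eps * np.sign(W[c])               # true gradient, for contrast

print("true class:", c)
print("class reported to attacker:", fake_top)              # usually != c
print("after fake-gradient attack:", true_class(x_adv_fake))  # typically still c
print("after real-gradient attack:", true_class(x_adv_real))  # typically flips
print("key holder decrypts correctly:",
      decrypt_class(served_logits(x)) == c)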
Year: 2021
DOI: 10.1145/3474085.3475685
Venue: International Multimedia Conference
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name            Order  Citations  PageRank
Xianglong Feng  1      6          3.83
Yi Xie          2      4          2.17
Mengmei Ye      3      6          3.21
Zhongze Tang    4      0          0.34
Bo Yuan         5      262        28.64
Wei Sheng       6      317        32.53