Title
Stealing Neural Networks via Timing Side Channels
Abstract
Deep learning is gaining importance in many applications. However, neural networks face several security and privacy threats. This is particularly significant when cloud infrastructures deploy a service with a neural network model at the back end. Here, an adversary can extract the neural network parameters, infer the regularization hyperparameter, identify whether a data point was part of the training data, and generate effective transferable adversarial examples to evade classifiers. This paper shows how a neural network model is susceptible to timing side channel attacks. A black-box neural network extraction attack is proposed that exploits timing side channels to infer the depth of the network. Although constructing an equivalent architecture is a complex search problem, it is shown how reinforcement learning with knowledge distillation can effectively reduce the search space to infer the target model. The proposed approach is tested with VGG (Visual Geometry Group) architectures on the CIFAR-10 dataset. It is observed that substitute models can be reconstructed with test accuracy close to that of the target models, and that the approach is scalable and independent of the type of neural network architecture.
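As a minimal sketch of the timing side channel the abstract describes (not the authors' code), the following PyTorch snippet measures average black-box inference latency for convolutional stacks of increasing depth on CIFAR-10-sized inputs; the helper names, layer counts, and trial counts are illustrative assumptions.

import time
import torch
import torch.nn as nn

def make_conv_stack(depth: int) -> nn.Sequential:
    # Hypothetical stand-in for VGG-like networks of varying depth:
    # `depth` repeated 3x3 conv blocks over 32x32 RGB inputs.
    layers = []
    channels = 3
    for _ in range(depth):
        layers += [nn.Conv2d(channels, 64, kernel_size=3, padding=1), nn.ReLU()]
        channels = 64
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10)]
    return nn.Sequential(*layers)

@torch.no_grad()
def mean_latency(model: nn.Module, trials: int = 100) -> float:
    # Average wall-clock time of one forward pass; query latency is
    # all a black-box adversary can observe from a deployed service.
    x = torch.randn(1, 3, 32, 32)
    model.eval()
    model(x)  # warm-up pass so one-time allocations don't skew timing
    start = time.perf_counter()
    for _ in range(trials):
        model(x)
    return (time.perf_counter() - start) / trials

for depth in (4, 8, 16):
    ms = mean_latency(make_conv_stack(depth)) * 1e3
    print(f"depth={depth:2d}  latency={ms:.2f} ms")

Deeper stacks take measurably longer per query; that correlation is what lets an attacker bound the target's depth before running the reinforcement-learning search with knowledge distillation over candidate architectures.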
Year: 2018
Venue: arXiv: Cryptography and Security
DocType: Journal
Volume: abs/1812.11720
Citations: 3
PageRank: 0.37
References: 26
Authors: 4
Name                     Order  Citations  PageRank
Vasisht Duddu            1      3          1.04
Debasis Samanta          2      227        37.98
D. Vijay Rao             3      3          0.70
Valentina Emilia Balas   4      195        37.08