Title
Precise Extraction of Deep Learning Models via Side-Channel Attacks on Edge/Endpoint Devices
Abstract
With their growing popularity, deep learning (DL) models are becoming larger in scale, and only companies with vast training datasets and immense computing power can serve such large models in their business. Most of these DL models are proprietary, so the companies strive to keep them safe from the model extraction attack (MEA), whose aim is to steal the model by training surrogate models. Nowadays, companies are inclined to offload the models from central servers to edge/endpoint devices. As revealed in recent studies, adversaries exploit this trend as a new attack vector, launching a side-channel attack (SCA) on the device running the victim model to obtain various pieces of model information, such as the model architecture (MA) and image dimension (ID). Our work provides, for the first time, a comprehensive understanding of the relationship between such exposed model information and MEA performance, and it would benefit future MEA studies on both the offensive and defensive sides in that they may learn which pieces of information exposed by SCA are more important than the others. Our analysis additionally reveals that, by grasping the victim model information from SCA, MEA can become highly effective and successful even without any prior knowledge of the model. Finally, to demonstrate the practicality of our analysis results, we empirically apply SCA and subsequently carry out MEA under realistic threat assumptions. The results show up to 5.8 times better performance than when the adversary has no information about the victim model.
Year
2022
DOI
10.1007/978-3-031-17143-7_18
Venue
COMPUTER SECURITY - ESORICS 2022, PT III
Keywords
Privacy in deep learning models, Model extraction attack, Side-channel attack
DocType
Conference
Volume
13556
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
6
Name            Order  Citations  PageRank
Younghan Lee    1      0          0.34
Sohee Jun       2      0          0.34
Yungi Cho       3      0          0.34
Woorim Han      4      0          0.34
Hyungon Moon    5      0          0.34
Yunheung Paek   6      0          0.34