Title
Occlumency: Privacy-preserving Remote Deep-learning Inference Using SGX
Abstract
Deep learning (DL) is receiving significant attention as an enabling technique for emerging mobile and IoT applications. Because DNN inference is computation- and memory-intensive, it is common practice to offload it to cloud services. However, such cloud-offloaded inference raises serious privacy concerns: malicious external attackers or untrustworthy internal administrators of the cloud may leak highly sensitive and private data such as images, voice, and text. In this paper, we propose Occlumency, a novel cloud-driven solution designed to protect user privacy without compromising the benefit of using powerful cloud resources. Occlumency leverages a secure SGX enclave to preserve the confidentiality and integrity of user data throughout the entire DL inference process. DL inference inside an SGX enclave, however, suffers severe performance degradation due to the limited physical memory space and inefficient page swapping. We designed a suite of novel techniques to accelerate DL inference inside the memory-limited enclave and implemented Occlumency based on Caffe. Our experiments with various DNN models show that Occlumency improves inference speed by 3.6x over baseline DL inference in SGX and achieves secure DL inference with 72% latency overhead compared to inference in the native environment.
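As a rough illustration of the memory constraint noted in the abstract (not Occlumency's actual design), the C++ sketch below runs inference layer by layer while reusing a single weight buffer, so resident weight memory stays bounded by the largest layer instead of the whole model; the Layer structure, callbacks, and buffer strategy are hypothetical.

    // Illustrative sketch only (not Occlumency's implementation): a layer-by-layer
    // inference loop that keeps only one layer's weights resident at a time, the
    // kind of bounded-memory approach needed inside a memory-constrained enclave.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical per-layer descriptor.
    struct Layer {
        std::size_t weight_count;                           // number of weights in this layer
        void (*load_weights)(float* dst, std::size_t n);    // fetches this layer's weights on demand
        void (*forward)(const float* weights, std::size_t n,
                        std::vector<float>& activations);   // computes the layer output
    };

    // Runs all layers while reusing a single weight buffer sized to the largest layer,
    // so peak weight memory never exceeds that bound.
    inline void run_inference(const std::vector<Layer>& layers,
                              std::vector<float>& activations) {
        std::size_t max_weights = 0;
        for (const auto& l : layers) {
            max_weights = std::max(max_weights, l.weight_count);
        }

        std::vector<float> weight_buf(max_weights);              // single reusable buffer
        for (const auto& l : layers) {
            l.load_weights(weight_buf.data(), l.weight_count);   // stream this layer's weights in
            l.forward(weight_buf.data(), l.weight_count, activations);
            // The next iteration overwrites weight_buf, keeping resident
            // weight memory bounded by the largest single layer.
        }
    }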
Year
2019
DOI
10.1145/3300061.3345447
Venue
MobiCom '19: The 25th Annual International Conference on Mobile Computing and Networking, Los Cabos, Mexico, October 2019
Keywords
Mobile deep learning, privacy, trusted execution environment, cloud offloading
DocType
Conference
ISBN
978-1-4503-6169-9
Citations
7
PageRank
0.49
References
11
Authors
10
Name | Order | Citations | PageRank
Taegyeong Lee | 1 | 7 | 0.49
Junehwa Song | 2 | 7 | 2.86
Zhiqi Lin | 3 | 14 | 1.77
Saumay Pushp | 4 | 53 | 7.47
Caihua Li | 5 | 7 | 0.49
Yunxin Liu | 6 | 694 | 54.18
Youngki Lee | 7 | 832 | 70.33
Fengyuan Xu | 8 | 343 | 25.52
Chenren Xu | 9 | 513 | 36.00
L.-F. Zhang | 10 | 28 | 3.81