Title
Securing Input Data of Deep Learning Inference Systems via Partitioned Enclave Execution.
Abstract
Deep learning systems have been widely deployed as the backend engines of artificial intelligence (AI) services because of their near-human performance on cognitive tasks. However, end users often have concerns about the confidentiality of the input data they provide, even to reputable AI service providers. Accidental disclosure of sensitive user data can occur through security breaches, exploited vulnerabilities, neglect, or insiders. In this paper, we systematically investigate the potential information exposure in deep-learning-based AI inference systems. Based on our observations, we develop DeepEnclave, a privacy-enhancing system that mitigates sensitive information disclosure in deep learning inference pipelines. The key innovation is to partition deep learning models and leverage secure enclave techniques on cloud infrastructures to cryptographically protect the confidentiality and integrity of user inputs. We formulate the information exposure problem as a reconstruction privacy attack and quantify the adversary's capabilities under different attack strategies. Our comprehensive security analysis and performance measurements can guide end users in choosing how to partition their deep neural networks, achieving the maximum privacy guarantee with acceptable performance overhead.
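The partitioning idea in the abstract can be illustrated with a minimal sketch: the first k layers of a sequential model run inside a trusted enclave on the plaintext input, and only the intermediate features leave the enclave for the remaining layers. This is not the paper's implementation; the toy layers, the split point, and all function names are hypothetical.

```python
# Illustrative sketch only (not DeepEnclave's actual code): split a
# sequential "model" so the privacy-critical front partition runs inside
# an enclave and the back partition runs on untrusted cloud hardware.

def relu(x):
    # Toy activation layer applied element-wise.
    return [max(0.0, v) for v in x]

def scale(factor):
    # Toy linear layer: multiply every element by a constant.
    return lambda x: [factor * v for v in x]

# A toy "deep network" represented as an ordered list of layer functions.
model = [scale(2.0), relu, scale(0.5), relu]

def partition(model, k):
    """Split the model: layers [0, k) for the enclave, [k, n) outside."""
    return model[:k], model[k:]

def run(layers, x):
    for layer in layers:
        x = layer(x)
    return x

front, back = partition(model, 2)      # hypothetical split after layer 2
hidden = run(front, [1.0, -3.0, 4.0])  # plaintext input stays in the enclave
output = run(back, hidden)             # only intermediate features leave
```

Choosing the split point trades privacy for performance: a deeper front partition exposes less recoverable input information but increases the work done inside the resource-constrained enclave, which is the trade-off the paper's security analysis and measurements are meant to guide.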
Year
2018
Venue
arXiv: Cryptography and Security
Field
End user, Inference, Computer security, Computer science, Service provider, Provisioning, Security analysis, Artificial intelligence, Deep learning, Information sensitivity, Cloud computing
DocType
Volume
abs/1807.00969
Citations
2
Journal
PageRank
0.36
References
31
Authors
7
Name                   Order  Citations  PageRank
Zhongshu Gu            1      135        10.84
Heqing Huang           2      35         4.16
Jialong Zhang          3      150        13.49
Dong Su                4      12         3.98
Ankita Lamba           5      2          0.36
Dimitrios Pendarakis   6      989        81.72
Ian Molloy             7      733        38.81