Title
Model inversion attacks against collaborative inference
Abstract
The prevalence of deep learning has drawn attention to the privacy protection of sensitive data. Various privacy threats have been presented, where an adversary can steal model owners' private data. Meanwhile, countermeasures have also been introduced to achieve privacy-preserving deep learning. However, most studies have focused only on data privacy during training and ignored privacy during inference. In this paper, we devise a new set of attacks to compromise inference data privacy in collaborative deep learning systems. Specifically, when a deep neural network and the corresponding inference task are split and distributed across different participants, one malicious participant can accurately recover an arbitrary input fed into the system, even without access to other participants' data or computations, or to prediction APIs for querying the system. We evaluate our attacks under different settings, models, and datasets to demonstrate their effectiveness and generality. We also study the characteristics of deep learning models that make them susceptible to such inference privacy threats, providing insights and guidelines for developing more privacy-preserving collaborative systems and algorithms.
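As a rough illustration of the attack setting the abstract describes (not the paper's exact algorithm), the sketch below shows a white-box feature-inversion loop in PyTorch: a participant who holds the first model partition and observes an intermediate activation optimizes a candidate input until its features match, with a total-variation prior for image smoothness. The function names, hyperparameters, and the choice of TV regularization here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def total_variation(x):
    # Smoothness prior over a batch of images shaped (N, C, H, W).
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def invert_features(f1, target_features, input_shape,
                    steps=2000, lr=0.1, tv_weight=1e-2):
    """Recover an input whose intermediate features under f1 (the first
    model partition, assumed known to the adversary) match the observed
    target_features sent between participants."""
    x = torch.zeros(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Match the observed activation, regularized toward smooth images.
        loss = F.mse_loss(f1(x), target_features) \
               + tv_weight * total_variation(x)
        loss.backward()
        opt.step()
    return x.detach()
```

In a split deployment, f1 would be the layers executed by the first participant; the same loop applies to any differentiable partition point, which is why shallow split points leak more about the input.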
Year
2019
DOI
10.1145/3359789.3359824
Venue
Proceedings of the 35th Annual Computer Security Applications Conference
Keywords
deep neural network, distributed computation, model inversion attack
Field
Model inversion, Computer science, Inference, Real-time computing, Artificial intelligence
DocType
Conference
ISBN
978-1-4503-7628-0
Citations
9
PageRank
0.45
References
0
Authors
3
Name            Order  Citations  PageRank
Zecheng He      1      25         5.05
Tianwei Zhang   2      55         7.65
Ruby Lee        3      2460       261.28