Title
Towards Differentially Private Text Representations
Abstract
Most deep learning frameworks require users to pool their local data or send model updates to a trusted server in order to train or maintain a global model. The assumption of a trusted server that has access to user information is ill-suited to many applications. To tackle this problem, we develop a new deep learning framework for the untrusted-server setting, which consists of three modules: (1) an embedding module, (2) a randomization module, and (3) a classifier module. For the randomization module, we propose a novel local differentially private (LDP) protocol that reduces the impact of the privacy parameter ε on accuracy and provides enhanced flexibility in choosing the randomization probabilities for LDP. Analysis and experiments show that our framework delivers comparable or even better performance than the non-private framework and existing LDP protocols, demonstrating the advantages of our LDP protocol.
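The record does not include the paper's actual randomization protocol. As a minimal sketch of the three-module pipeline described in the abstract, the snippet below assumes a sign-binarized embedding and uses the classic bitwise randomized-response mechanism as the LDP randomizer; this is an illustrative assumption, not the authors' protocol, and all function names and parameters here are hypothetical.

```python
import numpy as np


def binarize(embedding: np.ndarray) -> np.ndarray:
    """Illustrative embedding module output: sign-binarize a real-valued vector."""
    return (embedding > 0).astype(np.int8)


def randomized_response(bits: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Illustrative randomization module: classic bitwise randomized response
    (not the paper's protocol). Each bit is kept with probability
    e^eps / (1 + e^eps) and flipped otherwise, so each bit satisfies
    eps-LDP; a d-bit vector satisfies (d * eps)-LDP by composition."""
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    keep = rng.random(bits.shape) < keep_prob
    return np.where(keep, bits, 1 - bits).astype(np.int8)


# Toy usage: perturb a 16-dimensional embedding locally before sending it
# to the (untrusted) server, which would feed it to the classifier module.
emb = np.random.default_rng(0).normal(size=16)
private_bits = randomized_response(binarize(emb), epsilon=1.0)
```

In plain randomized response the keep probability is pinned to e^ε/(1+e^ε), so small ε rapidly degrades accuracy; per the abstract, the paper's protocol is designed precisely to loosen this coupling by allowing more flexible choices of the randomization probabilities.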
Year
2020
DOI
10.1145/3397271.3401260
Venue
SIGIR '20: The 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual Event, China, July 2020
DocType
Conference
ISBN
978-1-4503-8016-4
Citations
0
PageRank
0.34
References
7
Authors
4
Name           Order   Citations   PageRank
Lingjuan Lyu   1       33          4.61
Yitong Li      2       20          4.39
Xuanli He      3       28          5.81
Tong Xiao      4       131         23.91