Title: Differentially-Private "Draw and Discard" Machine Learning
Abstract
In this work, we propose a novel framework for privacy-preserving client-distributed machine learning. It is motivated by the desire to achieve differential privacy guarantees in the local model of privacy in a way that satisfies all system constraints using asynchronous client-server communication and provides attractive model learning properties. We call it Draw and Discard because it relies on random sampling of models for load distribution (scalability), which also provides additional server-side privacy protections and improved model quality through averaging. We present the mechanics of the client and server components of Draw and Discard and demonstrate how the framework can be applied to learning Generalized Linear models. We then analyze the privacy guarantees provided by our approach against several types of adversaries and showcase experimental results that provide evidence for the framework's viability in practical deployments.
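The abstract only sketches the mechanism, but a minimal toy version of the draw-and-discard loop it describes might look like the following. The instance count `K`, the clipping bound, the per-coordinate Laplace mechanism, and all function names are illustrative assumptions, not the paper's exact algorithm: the server keeps several model instances, a client draws one at random, applies a locally differentially-private update, and the server discards a randomly chosen instance in favor of the returned one.

```python
import math
import random

K = 10          # number of server-side model instances (assumed)
DIM = 3         # model dimensionality (toy example)
EPSILON = 1.0   # local differential-privacy budget (assumed)
CLIP = 1.0      # per-coordinate clipping bound for updates (assumed)

def laplace(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_update(model, gradient):
    """Client side: clip each gradient coordinate and add Laplace noise."""
    scale = 2.0 * CLIP / EPSILON  # per-coordinate sensitivity 2*CLIP
    return [w - (max(-CLIP, min(CLIP, g)) + laplace(scale))
            for w, g in zip(model, gradient)]

def draw_and_discard(instances, gradient):
    """Server side: draw a random instance, apply the client's private
    update, then overwrite a random instance with the result (discard)."""
    drawn = random.choice(instances)                       # "draw"
    updated = private_update(drawn, gradient)
    instances[random.randrange(len(instances))] = updated  # "discard"
    return instances

# Simulate 100 asynchronous client updates with toy gradients.
instances = [[0.0] * DIM for _ in range(K)]
for _ in range(100):
    toy_gradient = [random.gauss(0.0, 0.1) for _ in range(DIM)]
    instances = draw_and_discard(instances, toy_gradient)
```

Because each client touches only one randomly drawn instance, the scheme load-balances across instances while the population of K models is implicitly averaged over time, which matches the scalability and model-quality motivations given in the abstract.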
Year: 2018
Venue: arXiv: Cryptography and Security
Field: Asynchronous communication, Differential privacy, Computer science, Theoretical computer science, Generalized linear model, Model quality, Sampling (statistics), Artificial intelligence, Machine learning, Model learning, Scalability
DocType:
Volume: abs/1807.04369
Citations: 1
Journal:
PageRank: 0.37
References: 16
Authors: 7
Name                    Order  Citations  PageRank
Vasyl Pihur             1      44         3.63
Aleksandra Korolova     2      543        29.11
Frederick Liu           3      15         2.44
Subhash Sankuratripati  4      1          0.37
Moti Yung               5      12080      1152.41
Dachuan Huang           6      1          0.37
Ruogu Zeng              7      1          0.37