Abstract |
---|
Differentially Private SGD (DP-SGD) of Abadi et al. and its variants are the only known algorithms for private training of large-scale neural networks. This algorithm requires computing per-sample gradient norms, which is extremely slow and memory-intensive in practice. In this paper, we present a new framework for designing differentially private optimizers, which we instantiate as DP-SGD-JL and DP-Adam-JL. Our approach uses Johnson–Lindenstrauss (JL) projections to quickly approximate the per-sample gradient norms without computing them exactly, bringing the training time and memory requirements of our optimizers closer to those of their non-DP counterparts. Unlike previous attempts to make DP-SGD faster, which work only for a subset of network architectures or rely on compiler techniques, we propose an algorithmic solution that works for any network in a black-box manner; this is the main contribution of this paper. To illustrate this, on the IMDb dataset we train a Recurrent Neural Network (RNN) that achieves a good privacy-vs-accuracy trade-off while being significantly faster than DP-SGD and having a memory footprint similar to that of non-private SGD.
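To make the JL idea concrete, below is a minimal JAX sketch on a toy linear model (the model, function names, and the use of forward-mode JVPs are assumptions for illustration, not the authors' implementation). The principle: for a random direction v ~ N(0, I), E[⟨v, g_i⟩²] = ‖g_i‖², and a single Jacobian–vector product through the vector of per-sample losses yields ⟨v, g_i⟩ for every sample i at once, so averaging over a few random directions estimates each per-sample gradient norm without ever materializing a per-sample gradient.

```python
import jax
import jax.numpy as jnp

def per_sample_losses(params, X, y):
    # Toy linear model: returns one squared-error loss per example.
    preds = X @ params
    return 0.5 * (preds - y) ** 2

def jl_grad_norm_estimates(params, X, y, key, k=8):
    # For v ~ N(0, I_d), E[<v, g_i>^2] = ||g_i||^2, where g_i is the gradient
    # of the i-th per-sample loss w.r.t. params.  One forward-mode JVP in
    # direction v gives <v, g_i> for all samples simultaneously.
    d = params.shape[0]
    vs = jax.random.normal(key, (k, d))

    def dots_for_direction(v):
        _, dots = jax.jvp(lambda p: per_sample_losses(p, X, y), (params,), (v,))
        return dots  # shape (n,): <v, g_i> for each sample i

    dots = jax.vmap(dots_for_direction)(vs)       # shape (k, n)
    return jnp.sqrt(jnp.mean(dots ** 2, axis=0))  # per-sample norm estimates

# Sanity check against exact per-sample gradient norms on random data.
X = jax.random.normal(jax.random.PRNGKey(0), (8, 5))
y = jnp.ones(8)
params = jnp.zeros(5)
exact = jnp.linalg.norm(
    jax.vmap(jax.grad(lambda p, x, t: 0.5 * (x @ p - t) ** 2),
             in_axes=(None, 0, 0))(params, X, y),
    axis=1)
approx = jl_grad_norm_estimates(params, X, y, jax.random.PRNGKey(1), k=64)
print(exact, approx)
```

Increasing the number of random directions k tightens the estimate at a small extra cost per step; this trades a little accuracy in the norm estimates for avoiding the full per-sample gradient computation that makes standard DP-SGD slow and memory-hungry.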
Year | Venue | DocType |
---|---|---|
2021 | Annual Conference on Neural Information Processing Systems | Conference |
Citations | PageRank | References
---|---|---
0 | 0.34 | 0
Authors |
---|
6
Name | Order | Citations | PageRank |
---|---|---|---|
Bu, Zhiqi | 1 | 1 | 1.37 |
Sivakanth Gopi | 2 | 0 | 2.03 |
Janardhan Kulkarni | 3 | 0 | 0.34 |
Yin Tat Lee | 4 | 396 | 36.67 |
Judy Hanwen Shen | 5 | 0 | 0.34 |
Uthaipon Tantipongpipat | 6 | 9 | 3.79 |