Title
Loss-Privacy Tradeoff in Federated Edge Learning
Abstract
Federated learning has been recognized as a promising scheme to tackle privacy issues in multi-access edge computing by periodically uploading machine learning (ML) model updates, instead of the original user data, to the edge server. However, privacy leakage still remains in such federated edge learning (FEL) systems, since the model updates accessed by the server can be exploited to recover the original data. In this paper, we consider a personalized differential privacy based FEL scheme that alleviates this privacy leakage by adding a different noise perturbation to the model updates of each edge device. Note that the noise perturbations may degrade the ML model performance, which is captured by the global loss function. It is thus necessary to achieve a loss-privacy tradeoff in FEL by determining the noise scales and the numbers of local model updates. To address this challenge, we first derive a convergence upper bound on the global loss function as well as the closed-form privacy leakage from an adversarial perspective. We then propose a distributed mechanism that optimizes the choices of noise scales and numbers of local model updates, even when the server is unaware of the personalized privacy preferences of the edge devices. Extensive theoretical analysis and numerical evaluations demonstrate the effectiveness of the proposed method in terms of privacy preservation and the global loss of the learned model.
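The perturbation step described in the abstract, each device clipping its local update and adding noise at its own scale, can be sketched with the standard Gaussian mechanism. This is a minimal illustration, not the paper's exact scheme: the function and parameter names (perturb_update, clip_norm, sigma) and the example noise scales are assumptions for illustration.

```python
import numpy as np

def perturb_update(update, clip_norm, sigma, rng):
    """Clip a local model update to bound its L2 sensitivity,
    then add Gaussian noise (Gaussian mechanism sketch)."""
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    if norm > clip_norm:
        # Rescale so the update's L2 norm is at most clip_norm.
        update = update * (clip_norm / norm)
    # Personalized noise scale: a larger sigma gives stronger privacy
    # but degrades the global loss more.
    noise = rng.normal(0.0, sigma * clip_norm, size=update.shape)
    return update + noise

# Each device chooses sigma according to its own privacy preference
# (device names and values are illustrative).
rng = np.random.default_rng(0)
device_sigmas = {"device_0": 0.5, "device_1": 2.0}
local_update = np.ones(4)
perturbed = {dev: perturb_update(local_update, clip_norm=1.0, sigma=s, rng=rng)
             for dev, s in device_sigmas.items()}
```

The server then averages the perturbed updates; the paper's contribution is optimizing the noise scales and the numbers of local updates that feed this step, not the mechanism itself.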
Year
2022
DOI
10.1109/JSTSP.2022.3161786
Venue
IEEE Journal of Selected Topics in Signal Processing
Keywords
Differential privacy, distributed optimization, federated learning, mechanism design, multi-access edge computing
DocType
Journal
Volume
16
Issue
3
ISSN
1932-4553
Citations
0
PageRank
0.34
References
22
Authors
4
Name      Order  Citations  PageRank
T Liu     1      0          0.34
Boya Di   2      518        44.66
B Wang    3      0          0.34
L Song    4      0          1.01