Title
Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach
Abstract
Federated learning (FL) is an emerging technique for training machine learning models using geographically dispersed data collected by local entities. It comprises local computation and synchronization steps. To reduce the communication overhead and improve the overall efficiency of FL, gradient sparsification (GS) can be applied, where, instead of the full gradient, only a small subset of important elements of the gradient is communicated. Existing work on GS uses a fixed degree of gradient sparsity for i.i.d. data within a datacenter. In this paper, we consider an adaptive degree of sparsity and non-i.i.d. local datasets. We first present a fairness-aware GS method which ensures that different clients provide a similar amount of updates. Then, with the goal of minimizing the overall training time, we propose a novel online learning formulation and algorithm for automatically determining the near-optimal communication and computation trade-off that is controlled by the degree of gradient sparsity. The online learning algorithm uses an estimated sign of the derivative of the objective function, which gives a regret bound that is asymptotically equal to the case where the exact derivative is available. Experiments with real datasets confirm the benefits of our proposed approaches, showing up to 40% improvement in model accuracy within a finite training time.
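The abstract describes two mechanisms: communicating only the most important gradient elements (sparsification) and adapting the degree of sparsity online using only an estimated sign of the objective's derivative. The sketch below illustrates both ideas in plain NumPy; the function names, the use of per-round wall-clock time as the objective, and the multiplicative update of the sparsity degree k are illustrative assumptions and not the paper's exact algorithm.

import numpy as np

def sparsify_top_k(gradient: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k largest-magnitude entries of the gradient and zero the
    rest; only the kept values and their indices would be communicated."""
    flat = gradient.ravel()
    if k >= flat.size:
        return gradient.copy()
    keep = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k magnitudes
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(gradient.shape)

def adapt_sparsity(k: int, prev_round_time: float, curr_round_time: float,
                   direction: int, factor: float = 1.1,
                   k_min: int = 1, k_max: int = 10**6):
    """Sign-based online adaptation of the sparsity degree k (illustrative,
    hypothetical rule): if the measured per-round time got worse after the last
    change to k, reverse the direction of the change; otherwise keep going."""
    # Estimated sign of the derivative of round time w.r.t. the last change in k.
    if curr_round_time > prev_round_time:
        direction = -direction
    if direction > 0:
        k = max(int(round(k * factor)), k + 1)
    else:
        k = min(int(round(k / factor)), k - 1)
    return min(max(k, k_min), k_max), direction

In this sketch, each client would apply sparsify_top_k to its local gradient before uploading, and the sparsity degree would be updated once per round from measured round times via adapt_sparsity.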
Year: 2020
DOI: 10.1109/ICDCS47774.2020.00026
Venue: 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS)
Keywords: Distributed machine learning, edge computing, federated learning, gradient sparsification, online learning
DocType: Conference
ISSN: 1063-6927
ISBN: 978-1-7281-7003-9
Citations: 4
PageRank: 0.44
References: 0
Authors: 3
Name | Order | Citations | PageRank
Pengchao Han | 1 | 18 | 5.80
Shiqiang Wang | 2 | 557 | 37.04
Kin K. Leung | 3 | 2463 | 183.60