Title
GGS: General Gradient Sparsification for Federated Learning in Edge Computing
Abstract
Federated learning is an emerging paradigm that trains machine learning models on distributed datasets without sending the raw data to a data center. In an edge computing environment where wireless network resources are constrained, the key problem of federated learning is the communication overhead of parameter synchronization, which wastes bandwidth, increases training time, and can even degrade model accuracy. Gradient sparsification has received increasing attention: only significant gradients are sent for the update, while insignificant gradients are accumulated locally. However, preserving accuracy under a high sparsification ratio has largely been ignored. In this paper, a General Gradient Sparsification (GGS) framework is proposed for adaptive optimizers to correct the sparse gradient update process. It consists of two key mechanisms: gradient correction and batch normalization updates with local gradients (BN-LG). With gradient correction, the optimizer can properly handle the accumulated insignificant gradients, allowing the model to converge better. Furthermore, updating the batch normalization layers with local gradients relieves the impact of delayed gradients without increasing the communication overhead. We have conducted experiments on LeNet-5, CifarNet, DenseNet-121, and AlexNet with adaptive optimizers. Results show that even when 99.9% of the gradients are sparsified, the models maintain their top-1 accuracy on the validation datasets.
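The baseline mechanism the abstract builds on (send only the largest-magnitude gradients, accumulate the rest locally) can be sketched as below. This is a minimal illustration of generic top-k sparsification with residual accumulation plus the BN-LG idea of exempting batch-norm parameters, not the authors' GGS code; the class and argument names (TopKSparsifier, sparsify, is_bn) are hypothetical, only the 99.9% ratio comes from the abstract, and the paper's gradient-correction step for adaptive optimizers is not reproduced here.

```python
import torch

class TopKSparsifier:
    """Hypothetical sketch: keep the largest-magnitude fraction of each
    gradient tensor and accumulate the discarded entries locally."""

    def __init__(self, model, sparsity=0.999):  # 99.9% of entries withheld
        self.sparsity = sparsity
        # One local residual buffer per parameter tensor.
        self.residuals = {
            name: torch.zeros_like(p) for name, p in model.named_parameters()
        }

    def sparsify(self, name, grad, is_bn=False):
        # BN-LG idea: batch-norm parameters are updated with local
        # gradients only, so they bypass sparsification entirely.
        if is_bn:
            return grad
        acc = self.residuals[name] + grad            # add accumulated residual
        k = max(1, int(acc.numel() * (1.0 - self.sparsity)))
        threshold = acc.abs().flatten().topk(k).values.min()
        mask = acc.abs() >= threshold                # significant entries
        self.residuals[name] = acc * (~mask).to(acc.dtype)  # keep the rest locally
        return acc * mask.to(acc.dtype)              # sparse update to communicate
```

In a typical use, each worker would call sparsify on every parameter's gradient after loss.backward() and before the optimizer step, transmitting only the nonzero entries of the returned tensor to the server.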
Year
2020
DOI
10.1109/ICC40277.2020.9148987
Venue
ICC 2020 - 2020 IEEE International Conference on Communications (ICC)
Keywords
Training, Edge computing, Computational modeling, Servers, Convergence, Machine learning, Adaptation models
DocType
Conference
ISSN
1550-3607
ISBN
978-1-7281-5089-5
Citations
3
PageRank
0.37
References
0
Authors
6
Name         Order  Citations  PageRank
Shiqi Li     1      9          0.76
Qi Qi        2      2105       6.01
J. Wang      3      4799       5.23
Haifeng Sun  4      682        7.77
Yujian Li    5      3          0.37
Fei Yu       6      51163      35.58