Title
Communication-efficient Federated Learning via Quantized Clipped SGD
Abstract
Communication has been considered a major bottleneck of Federated Learning (FL) in mobile edge networks, since participating workers iteratively transmit gradients to and receive models from the server. Compression techniques such as quantization, which reduce communication overhead, and hyperparameter optimization techniques such as Clipped Stochastic Gradient Descent (Clipped SGD), which accelerate convergence, are two orthogonal approaches to improving the performance of FL. However, their combination has been little studied. To fill this gap, we propose Quantized Clipped SGD (QCSGD) to achieve communication-efficient FL. The major challenge of combining them is that gradient quantization essentially affects the step-size adjustment policy of Clipped SGD, leaving the combination without a convergence guarantee. We therefore establish the convergence rate of QCSGD through a thorough theoretical analysis and show that QCSGD attains a convergence rate comparable to that of SGD without compression. Extensive experiments on various machine learning models and datasets demonstrate that QCSGD outperforms state-of-the-art methods.
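Since only the abstract is available in this record, the following is a minimal sketch of what a QCSGD-style worker/server round could look like, assuming a standard norm-clipping rule and a QSGD-style unbiased stochastic quantizer. All function names (clip_by_norm, stochastic_quantize, worker_message, server_step) and hyperparameters (clip_threshold, levels, lr) are illustrative assumptions, not the paper's exact algorithm.

# Minimal sketch of a QCSGD-style round, assuming norm clipping plus a
# QSGD-style stochastic quantizer; the paper's exact update rule may differ.
import numpy as np

def clip_by_norm(grad, threshold):
    # Clipped SGD: rescale the gradient so its norm never exceeds `threshold`,
    # which effectively shrinks the step size when the gradient is large.
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

def stochastic_quantize(grad, levels=8):
    # QSGD-style unbiased quantization onto `levels` uniform levels per
    # coordinate (scaled by the gradient norm); fewer bits are sent per entry.
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return np.zeros_like(grad)
    scaled = np.abs(grad) / norm * levels             # values in [0, levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                          # round up with this probability
    quantized = lower + (np.random.rand(*grad.shape) < prob_up)
    return np.sign(grad) * quantized * norm / levels  # unbiased estimate of grad

def worker_message(grad, clip_threshold=1.0, levels=8):
    # What each worker transmits: a quantized, clipped stochastic gradient.
    return stochastic_quantize(clip_by_norm(grad, clip_threshold), levels)

def server_step(model, worker_grads, lr=0.1):
    # The server averages the compressed gradients and takes an SGD step.
    avg = np.mean(worker_grads, axis=0)
    return model - lr * avg

# Toy usage: 4 workers, each with a noisy quadratic loss 0.5*||w - w*||^2.
rng = np.random.default_rng(0)
w_star = rng.normal(size=10)
model = np.zeros(10)
for _ in range(100):
    grads = [worker_message(model - w_star + 0.01 * rng.normal(size=10))
             for _ in range(4)]
    model = server_step(model, grads)
print("distance to optimum:", np.linalg.norm(model - w_star))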
Year
2021
DOI
10.1007/978-3-030-85928-2_44
Venue
WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2021, PT I
Keywords
Federated learning, Gradient quantization, Clipped gradient descent
DocType
Conference
Volume
12937
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
3
Name, Order, Citations, PageRank
Ninghui Jia, 1, 0, 0.34
Zhihao Qu, 2, 42, 5.45
Baoliu Ye, 3, 212, 32.11