Title
Federated Learning with Communication Delay in Edge Networks
Abstract
Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks. This work addresses an important consideration of federated learning at the network edge: communication delays between the edge nodes and the aggregator. A technique called FedDelAvg (federated delayed averaging) is developed, which generalizes the standard federated averaging algorithm to incorporate a weighting between the current local model and the delayed global model received at each device during the synchronization step. Through theoretical analysis, an upper bound is derived on the global model loss achieved by FedDelAvg, which reveals a strong dependency of learning performance on the values of the weighting and learning rate. Experimental results on a popular ML task indicate significant improvements in terms of convergence speed when optimizing the weighting scheme to account for delays.
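The abstract describes the FedDelAvg synchronization step as a weighted combination of each device's current local model and the delayed global model it receives. The following is a minimal Python sketch of that idea under stated assumptions: the toy quadratic losses, variable names (GAMMA, DELAY, feddelavg-style blending), and the fixed-delay buffer are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DEVICES, DIM, ROUNDS, DELAY = 5, 10, 50, 2  # toy sizes; DELAY measured in rounds
LR, GAMMA = 0.1, 0.5  # learning rate and local/global weighting (assumed values)

# Each device minimizes a local quadratic ||w - c_k||^2 as a stand-in for an ML loss.
centers = rng.normal(size=(DEVICES, DIM))
grads = [lambda w, c=c: 2.0 * (w - c) for c in centers]

w_local = [np.zeros(DIM) for _ in range(DEVICES)]
global_history = [np.zeros(DIM)]  # aggregator snapshots; older entries model delay

for t in range(ROUNDS):
    # Local gradient step on every device.
    for k in range(DEVICES):
        w_local[k] = w_local[k] - LR * grads[k](w_local[k])
    # Aggregator averages the local models it has received.
    global_history.append(np.mean(w_local, axis=0))
    # Synchronization: each device blends its current local model with the
    # global model it receives, which is DELAY rounds stale. GAMMA = 0 would
    # overwrite the local model with the (delayed) global model, as in
    # standard federated averaging.
    w_stale = global_history[max(0, len(global_history) - 1 - DELAY)]
    for k in range(DEVICES):
        w_local[k] = GAMMA * w_local[k] + (1.0 - GAMMA) * w_stale

print("final global model:", np.round(np.mean(w_local, axis=0), 3))
```

Per the abstract, the choice of the weighting (GAMMA here) and the learning rate strongly affects convergence under delay, which is what this sketch lets one experiment with.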
Year: 2020
DOI: 10.1109/GLOBECOM42002.2020.9322592
Venue: GLOBECOM 2020 - 2020 IEEE Global Communications Conference
Keywords: Federated learning, edge intelligence, distributed machine learning, convergence analysis, edge-cloud computing
DocType: Conference
ISSN: 1930-529X
ISBN: 978-1-7281-8299-5
Citations: 1
PageRank: 0.37
References: 0
Authors: 3
Name                    Order  Citations  PageRank
Frank Po-Chen Lin       1      2          1.42
Christopher G. Brinton  2      118        15.23
Nicolò Michelusi        3      427        35.43