Title
Edge-Based Communication Optimization for Distributed Federated Learning
Abstract
Federated learning can achieve distributed machine learning without sharing the private and sensitive data of end devices. However, highly concurrent access to cloud servers increases the transmission delay of model updates. Moreover, some local models may be unnecessary because their gradients oppose that of the global model, incurring substantial additional communication cost. Existing work mainly focuses on reducing communication rounds or cleaning defective local data, and neither takes into account the latency caused by high server concurrency. To this end, we study an edge-based communication optimization framework that reduces the number of end devices directly connected to the parameter server while avoiding the upload of unnecessary local updates. Specifically, we cluster devices in the same network location and deploy mobile edge nodes at different network locations to serve as communication hubs between the cloud and end devices, thereby avoiding the latency associated with high server concurrency. Meanwhile, we propose a cosine-similarity-based method to filter out unnecessary models, thus avoiding unnecessary communication. Experimental results show that, compared with traditional federated learning, the proposed scheme reduces the number of local updates by 60% and increases the convergence speed of the evaluated model by 10.3%.
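A minimal sketch of the cosine-similarity filtering described in the abstract, assuming each local update is flattened into a vector and compared against the previous round's global update direction; the function names, the zero threshold, the NumPy usage, and the simple averaging at the edge node are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def filter_local_updates(local_updates, global_direction, threshold=0.0):
    """Keep only local updates whose direction agrees with the global update.

    local_updates: list of 1-D np.ndarray, one flattened model delta per device
    global_direction: 1-D np.ndarray, e.g. the previous round's global model delta
    threshold: minimum cosine similarity required for an upload (assumed value)
    """
    kept = []
    for update in local_updates:
        if cosine_similarity(update, global_direction) >= threshold:
            kept.append(update)  # forwarded to the edge node / parameter server
        # otherwise the update is dropped locally, saving one upload
    return kept

def edge_aggregate(kept_updates):
    """Edge node averages the surviving updates before forwarding to the cloud."""
    if not kept_updates:
        return None
    return np.mean(np.stack(kept_updates), axis=0)
```

In this sketch, devices clustered under the same edge node would call filter_local_updates before uploading, and the edge node would forward only the aggregated result, reducing both the number of uploads and the number of connections the cloud server must handle concurrently.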
Year
2022
DOI
10.1109/TNSE.2021.3083263
Venue
IEEE Transactions on Network Science and Engineering
Keywords
Federated learning, Communication optimization, Mobile edge nodes, Model filtering, Clustering
DocType
Journal
Volume
9
Issue
4
ISSN
2327-4697
Citations
1
PageRank
0.37
References
20
Authors
6
Name, Order, Citations, PageRank
Wang Tian, 1, 17, 15.16
Ying Liu, 2, 17, 19.81
Xiaolin Zheng, 3, 300, 36.99
Hongning Dai, 4, 629, 62.25
Weijia Jia, 5, 2656, 221.35
Min Xie, 6, 1263, 96.98