Title: Fast-Convergent Federated Learning With Adaptive Weighting
Abstract: Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server while keeping privacy-sensitive data local. Non-independent-and-identically-distributed (non-IID) data samples across the participating nodes slow model training and increase the number of communication rounds required for FL to converge. In this paper, we propose...
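The abstract describes the standard FL setting: edge clients train locally and a central server aggregates their models each round. As a point of reference, the sketch below shows generic weighted federated averaging (FedAvg-style aggregation); it is not the paper's adaptive weighting scheme, which the truncated abstract does not specify, and all names (`federated_average`, `client_models`, `client_weights`) are illustrative assumptions.

```python
import numpy as np

def federated_average(client_models, client_weights):
    """Aggregate per-client parameters into a global model.

    client_models:  list of dicts mapping parameter name -> np.ndarray.
    client_weights: per-client aggregation weights; in vanilla FedAvg
                    these are proportional to local sample counts, while
                    an adaptive scheme would adjust them each round
                    (e.g. based on how informative each client's update is).
    """
    weights = np.asarray(client_weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a convex combination
    # Weighted sum of each parameter tensor across clients.
    return {
        name: sum(w * model[name] for w, model in zip(weights, client_models))
        for name in client_models[0]
    }
```

For example, two clients holding parameters 0.0 and 2.0 with equal weights aggregate to 1.0; skewing the weights toward one client pulls the global model toward that client's update.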
Year: 2021
DOI: 10.1109/TCCN.2021.3084406
Venue: IEEE Transactions on Cognitive Communications and Networking
Keywords: Data models, Training, Convergence, Adaptation models, Collaborative work, Distributed databases, Servers
DocType: Journal
Volume: 7
Issue: 4
ISSN: 2332-7731
Citations: 3
PageRank: 0.43
References: 0
Authors: 2
Name        Order  Citations  PageRank
Hongda Wu   1      3          0.43
Ping Wang   2      4153       216.93