Title
FedDNA: Federated Learning with Decoupled Normalization-Layer Aggregation for Non-IID Data
Abstract
In the federated learning paradigm, multiple mobile clients independently train local models on datasets generated by edge devices, and a server aggregates the model parameters received from the clients to form a global model. Conventional methods aggregate gradient parameters and statistical parameters without distinction, which leads to large aggregation bias due to cross-model distribution covariate shift (CDCS) and results in a severe performance drop for federated learning under non-IID data. In this paper, we propose FedDNA, a novel decoupled parameter aggregation method that addresses the performance issues caused by CDCS. With the proposed method, the gradient parameters are aggregated using conventional federated averaging, while the statistical parameters are aggregated with an importance weighting method that reduces the divergence between the local models and the central model, where the importance weights are optimized collaboratively by an adversarial learning algorithm based on a variational autoencoder (VAE). Extensive experiments on various federated learning scenarios with four open datasets show that FedDNA achieves significant performance improvements over state-of-the-art methods.
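The decoupled aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: parameter names, the rule for identifying statistical parameters (BatchNorm running statistics), and the fixed example importance weights are all assumptions, and the VAE-based adversarial learning that the paper uses to optimize the importance weights is omitted entirely.

```python
from typing import Dict, List

def is_statistical(name: str) -> bool:
    # Assumption: normalization-layer running statistics are the
    # "statistical parameters"; everything else is a gradient parameter.
    return name.endswith(("running_mean", "running_var"))

def aggregate(client_models: List[Dict[str, List[float]]],
              importance: List[float]) -> Dict[str, List[float]]:
    """Gradient parameters: plain federated averaging.
    Statistical parameters: importance-weighted averaging
    (here with given weights; FedDNA learns them adversarially)."""
    n = len(client_models)
    total_w = sum(importance)
    global_model: Dict[str, List[float]] = {}
    for name in client_models[0]:
        dim = len(client_models[0][name])
        if is_statistical(name):
            # importance-weighted average of the clients' statistics
            global_model[name] = [
                sum(w * m[name][i] for w, m in zip(importance, client_models)) / total_w
                for i in range(dim)
            ]
        else:
            # conventional FedAvg (uniform client weights for simplicity)
            global_model[name] = [
                sum(m[name][i] for m in client_models) / n
                for i in range(dim)
            ]
    return global_model
```

With two toy clients, the weight tensors are averaged uniformly while the BatchNorm running mean follows the importance weights, which is the core idea of treating the two parameter types differently.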
Year
2021
DOI
10.1007/978-3-030-86486-6_44
Venue
MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES
Keywords
Federated learning, Deep learning, Machine learning
DocType
Conference
Volume
12975
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
3
Name           Order  Citations  PageRank
Jian-Hui Duan  1      0          0.34
Wenzhong Li    2      676        55.27
Sanglu Lu      3      1380       144.07