Title
Federated Pruning: Improving Neural Network Efficiency with Federated Learning
Abstract
Automatic Speech Recognition models require large amounts of speech data for training, and the collection of such data often raises privacy concerns. Federated learning is widely used and considered an effective decentralized technique: a shared prediction model is learned collaboratively while the data stays local on clients' devices. However, the limited computation and communication resources of client devices present practical difficulties for large models. To overcome these challenges, we propose Federated Pruning to train a reduced model in the federated setting while maintaining performance similar to that of the full model. Moreover, the vast amount of client data can also be leveraged to improve pruning results compared to centralized training. We explore different pruning schemes and provide empirical evidence of the effectiveness of our methods.
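The abstract describes the overall loop of Federated Pruning: clients train a masked copy of the shared model on local data, the server aggregates the updates, and the pruning mask is refreshed so the model shrinks over rounds. The Python/NumPy sketch below is a minimal illustration of one plausible instantiation, not the authors' implementation: it assumes magnitude-based pruning (one common scheme; the paper explores several), FedAvg aggregation, a toy linear model, and an invented gradual sparsity schedule. All names, values, and helpers here are assumptions for illustration.

    # Minimal sketch of a federated-pruning round: local training under a
    # pruning mask, FedAvg aggregation, then server-side mask refresh.
    # The model, data, schedule, and pruning scheme are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def magnitude_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Keep the largest-magnitude weights; zero out the smallest fraction."""
        k = int(sparsity * weights.size)
        if k == 0:
            return np.ones_like(weights, dtype=bool)
        threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
        return np.abs(weights) > threshold

    def client_update(weights, mask, x, y, lr=0.1, steps=5):
        """Local SGD on a least-squares toy task; gradients respect the mask."""
        w = weights.copy()
        for _ in range(steps):
            grad = 2 * x.T @ (x @ w - y) / len(x)
            w -= lr * grad * mask          # pruned weights stay at zero
        return w

    # Toy federated setup: 4 clients, each holding private data for the same task.
    d, n_clients = 16, 4
    true_w = rng.normal(size=d) * (rng.random(d) > 0.5)   # sparse ground truth
    clients = []
    for _ in range(n_clients):
        x = rng.normal(size=(32, d))
        clients.append((x, x @ true_w + 0.01 * rng.normal(size=32)))

    w = rng.normal(size=d) * 0.1
    mask = np.ones(d, dtype=bool)
    for rnd in range(10):
        sparsity = min(0.5, 0.05 * rnd)            # assumed gradual sparsity schedule
        local = [client_update(w, mask, x, y) for x, y in clients]
        w = np.mean(local, axis=0)                 # FedAvg aggregation on the server
        mask = magnitude_mask(w, sparsity)         # server refreshes the pruning mask
        w *= mask
    print(f"final sparsity: {1 - mask.mean():.2f}, nonzero weights: {mask.sum()}")

The property this sketch aims to preserve is that pruned weights remain zero during local training, so the model clients train and communicate is effectively smaller, which is the motivation the abstract gives for pruning under client resource constraints.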
Year
2022
DOI
10.21437/INTERSPEECH.2022-10787
Venue
Conference of the International Speech Communication Association (INTERSPEECH)
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
7
Name                 Order  Citations  PageRank
Rongmei Lin          1      6          3.46
Yonghui Xiao         2      120        10.00
Tien-Ju Yang         3      0          1.01
Ding Zhao            4      0          3.04
Li Xiong             5      2335       142.42
Giovanni Motta       6      0          0.34
Françoise Beaufays   7      6          2.82