Title
IoT Device Friendly and Communication-Efficient Federated Learning via Joint Model Pruning and Quantization
Abstract
Federated learning (FL), through its novel applications and services, has established itself as a promising tool in the Internet of Things (IoT) domain. In particular, in a multiaccess edge computing setup with a host of IoT devices, FL is well suited since it leverages distributed client data to train high-performance deep learning (DL) models while keeping the data private. However, the underlying deep neural networks (DNNs) are large, preventing their direct deployment onto IoT devices with constrained computing resources and limited memory. Moreover, the frequent exchange of model updates between the central server and clients in FL can create a communication bottleneck. To address these challenges, in this article, we introduce GWEP, a model compression-based FL method. It utilizes joint quantization and model pruning to reap the benefits of DNNs while respecting the capabilities of resource-constrained devices. By reducing the computational, memory, and network footprint of FL, it enables low-end IoT devices to participate in the FL process. In addition, we provide theoretical guarantees on the convergence of FL. Through empirical evaluations, we demonstrate that our approach significantly outperforms the baseline algorithms, being up to 10.23 times faster and requiring 11 times fewer communication rounds, while achieving high model compression, energy efficiency, and learning performance.
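As a rough illustration of the compression pipeline the abstract describes, the sketch below combines unstructured magnitude pruning with uniform quantization of a client's model update before upload. This is a minimal NumPy sketch of the generic joint pruning-and-quantization technique, not the paper's GWEP algorithm; the function names, parameters, and defaults are hypothetical.

```python
import numpy as np

def prune_and_quantize(update, prune_ratio=0.95, num_bits=8):
    """Magnitude-prune a model update, then uniformly quantize the
    surviving weights. Illustrative only; not the paper's GWEP method."""
    flat = update.ravel()
    keep = max(1, int(round(flat.size * (1.0 - prune_ratio))))
    # Keep only the `keep` largest-magnitude entries (unstructured pruning).
    idx = np.argpartition(np.abs(flat), -keep)[-keep:]
    vals = flat[idx]
    # Map the kept values onto 2**num_bits uniform quantization levels.
    lo, hi = float(vals.min()), float(vals.max())
    scale = (hi - lo) / (2 ** num_bits - 1) or 1.0
    codes = np.round((vals - lo) / scale).astype(np.uint16)
    return idx, codes, lo, scale, update.shape

def decompress(idx, codes, lo, scale, shape):
    """Server-side reconstruction of the sparse, quantized update."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = lo + codes.astype(np.float32) * scale
    return flat.reshape(shape)

# Toy round trip: compress one layer's update before "uploading" it.
rng = np.random.default_rng(0)
layer_update = rng.normal(size=(256, 128)).astype(np.float32)
payload = prune_and_quantize(layer_update)
restored = decompress(*payload)
print("kept:", payload[1].size, "of", layer_update.size, "weights")
```

In such a scheme, a client would apply this to each layer's update and transmit only the surviving indices and low-bit codes, which is where the communication savings over sending full-precision dense updates come from.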
Year
2022
DOI
10.1109/JIOT.2022.3145865
Venue
IEEE Internet of Things Journal
Keywords
Federated learning (FL), gradient compression, Internet of Things (IoT) devices, model pruning, quantization
DocType
Journal
Volume
9
Issue
15
ISSN
2327-4662
Citations
0
PageRank
0.34
References
8
Authors
8
Name            Order  Citations  PageRank
Pavana Prakash  1      0          0.68
Jiahao Ding     2      18         5.34
Rui Chen        3      5          1.51
Xiaoqi Qin      4      21         6.47
Minglei Shu     5      0          0.34
Qimei Cui       6      6427       9.84
Yuanxiong Guo   7      60         5.90
Miao Pan        8      571        6.43