Title: Toward Distributed, Global, Deep Learning Using IoT Devices
Abstract:
Deep learning (DL) using large-scale, high-quality IoT datasets can be computationally expensive. Using such datasets to produce a problem-solving model within a reasonable time frame requires a scalable distributed training platform/system. We present a novel approach that trains a single DL model on the hardware of thousands of mid-sized IoT devices across the world, rather than on a GPU cluster available within a data center. We analyze the scalability and convergence of the resulting model and identify three bottlenecks: computationally heavy operations, time-consuming dataset-loading I/O, and the slow exchange of model gradients. To highlight the research challenges of globally distributed DL training and classification, we consider a case study from the video data processing domain. We also outline the need for a two-step deep compression method, which increases the speed and scalability of DL training. Our initial experimental validation shows that the proposed method improves the tolerance of the distributed training process to varying Internet bandwidth, latency, and Quality of Service metrics.
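The abstract's "two-step deep compression" of model gradients is commonly realized as sparsification followed by quantization, which shrinks the payload each IoT device must upload per training round. The sketch below is an illustration of that general two-step idea using NumPy, not the authors' actual implementation; the function names, the 1% top-k ratio, and the 8-bit linear quantization scheme are all assumptions for the example.

```python
import numpy as np

def sparsify_top_k(grad, k_ratio=0.01):
    """Step 1: keep only the k largest-magnitude gradient entries."""
    flat = grad.ravel()
    k = max(1, int(flat.size * k_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    return idx.astype(np.int32), flat[idx]

def quantize_int8(values):
    """Step 2: linearly quantize the surviving values to 8 bits."""
    m = float(np.abs(values).max())
    scale = m / 127.0 if m > 0 else 1.0
    q = np.round(values / scale).astype(np.int8)
    return q, scale

def dequantize(idx, q, scale, shape):
    """Receiver side: rebuild a dense, sparse-approximate gradient."""
    flat = np.zeros(int(np.prod(shape)), dtype=np.float32)
    flat[idx] = q.astype(np.float32) * scale
    return flat.reshape(shape)

# Example: compress a mock 1000x100 gradient before sending it over the network.
rng = np.random.default_rng(0)
grad = rng.normal(size=(1000, 100)).astype(np.float32)
idx, vals = sparsify_top_k(grad, k_ratio=0.01)
q, scale = quantize_int8(vals)
recovered = dequantize(idx, q, scale, grad.shape)
```

With these (assumed) settings, each device transmits roughly 5 bytes per kept entry (a 4-byte index plus a 1-byte quantized value) for 1% of the entries, instead of 4 bytes for every entry, which is what makes the exchange tolerant of low-bandwidth, high-latency links.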
Year: 2021
DOI: 10.1109/MIC.2021.3053711
Venue: IEEE Internet Computing
Keywords: DL training processing, distributed training process, global learning, deep learning, high-quality IoT datasets, problem-solving model, mid-sized IoT devices, GPU cluster, data center, classification, video data processing domain, deep compression method, Internet bandwidth, Quality of Service metrics
DocType: Journal
Volume: 25
Issue: 3
ISSN: 1089-7801
Citations: 1
PageRank: 0.34
References: 0
Authors: 9
Name                    Order  Citations  PageRank
Bharath Sudharsan       1      1          0.34
Pankesh Patel           2      1          0.34
John G. Breslin         3      2          0.70
Muhammad Intizar Ali    4      243        35.86
Karan Mitra             5      1          0.68
Schahram Dustdar        6      9347       575.71
Omer Rana               7      2          0.70
Prem Prakash Jayaraman  8      2          3.14
Rajiv Ranjan            9      4747       267.72