Title
Grouping Synchronous to Eliminate Stragglers with Edge Computing in Distributed Deep Learning
Abstract
With the development of artificial intelligence (AI) applications, large amounts of data are generated by mobile and IoT devices at the edge of the network, and deep learning tasks are executed to extract useful information from these user data. However, the edge nodes are heterogeneous and the network bandwidth is limited in this setting, which will cause general distributed deep learning to be inefficie...
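For context on the techniques named in the keywords below (gradient sparsification over a parameter server, used to cut communication on bandwidth-limited edge links), here is a minimal illustrative sketch. The function names and the top-k ratio are assumptions for illustration only, not the paper's implementation.

```python
# Illustrative top-k gradient sparsification (hypothetical helpers, not the
# paper's code): a worker transmits only the k largest-magnitude gradient
# entries; the parameter server rebuilds a dense gradient from them.
import numpy as np

def sparsify_topk(grad: np.ndarray, ratio: float = 0.01):
    """Keep the top `ratio` fraction of entries by magnitude; drop the rest."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the k largest |g|
    return idx, flat[idx]                         # (indices, values) to transmit

def desparsify(idx: np.ndarray, vals: np.ndarray, shape):
    """Parameter-server side: reconstruct a dense gradient from a sparse update."""
    dense = np.zeros(int(np.prod(shape)))
    dense[idx] = vals
    return dense.reshape(shape)

# Example: a worker compresses its gradient, the server reconstructs it.
g = np.random.randn(4, 256)
idx, vals = sparsify_topk(g, ratio=0.05)
g_hat = desparsify(idx, vals, g.shape)
```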
Year
2021

DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom52081.2021.00066

Venue
2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
Keywords
distributed deep training, gradient compression, parameter server, gradient sparsification

DocType
Conference

ISSN
2158-9178
ISBN
978-1-6654-3574-1

Citations
0

PageRank
0.34
References
0

Authors
9
Name           Order   Citations   PageRank
Zhiyi Gui      1       0           0.34
Yang Xiang     2       2930        212.67
Hao Yang       3       0           0.68
Wei Li         4       0           0.34
Lei Zhang      5       0           0.34
Qi Qi          6       210         56.01
J. Wang        7       479         95.23
Haifeng Sun    8       68          27.77
Jianxin Liao   9       457         82.08