Title
Minimizing Training Time of Distributed Machine Learning by Reducing Data Communication
Abstract
Due to the additive property of most machine learning objective functions, training can be distributed across multiple machines. Distributed machine learning is an efficient way to deal with the rapid growth of data volume, at the cost of extra inter-machine communication. One common implementation is the parameter server system, which contains two types of nodes: worker nodes, which are used for ca...
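For context, here is a minimal single-process sketch of the parameter-server pattern the abstract describes, assuming a least-squares objective split across per-worker data shards. The class names, learning rate, and synthetic data are illustrative assumptions, not the paper's actual system; in a real deployment the gradient pull/push would be the inter-machine communication the paper aims to reduce.

```python
import numpy as np

class ParameterServer:
    """Server node: holds the shared model and aggregates worker gradients."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)   # shared model parameters
        self.lr = lr

    def update(self, gradients):
        # Averaging per-shard gradients is valid because the overall
        # objective is additive across workers' data shards.
        self.w -= self.lr * np.mean(gradients, axis=0)

class Worker:
    """Worker node: computes the gradient on its local data shard."""
    def __init__(self, X, y):
        self.X, self.y = X, y

    def gradient(self, w):
        # Gradient of the local mean-squared-error loss (1/n)||Xw - y||^2.
        return 2 * self.X.T @ (self.X @ w - self.y) / len(self.y)

# Synthetic regression problem, partitioned across 4 workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
w_true = rng.normal(size=5)
y = X @ w_true

workers = [Worker(Xs, ys) for Xs, ys in zip(np.split(X, 4), np.split(y, 4))]
server = ParameterServer(dim=5)

for _ in range(100):
    # Each round: workers pull w, compute local gradients (in parallel on
    # real hardware), and push them back for aggregation.
    grads = [wk.gradient(server.w) for wk in workers]
    server.update(grads)

print("parameter error:", np.linalg.norm(server.w - w_true))
```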
Year
2021
DOI
10.1109/TNSE.2021.3073897
Venue
IEEE Transactions on Network Science and Engineering
Keywords
Servers, Training, Machine learning, Machine learning algorithms, Data models, Resource management, Distributed databases
DocType
Journal
Volume
8
Issue
2
ISSN
2327-4697
Citations
1
PageRank
0.34
References
0
Authors
3
Name        Order  Citations  PageRank
Yubin Duan  1      5          4.47
Ning Wang   2      21         7.96
Jie Wu      3      8307       592.07