Title
Performance Modeling of Distributed Deep Neural Networks
Abstract
During the past decade, machine learning has become extremely popular and can be found in many aspects of our everyday life. Nowadays, with the explosion of data and the rapid growth of computation capacity, Distributed Deep Neural Networks (DDNNs), which can improve their performance linearly with more computation resources, have become a hot and trending topic. However, there has not been an in-depth study of the performance of these systems and how well they scale. In this paper we analyze CNTK, one of the most commonly used DDNN frameworks, by first building a performance model and then evaluating the system in two settings: a small cluster with all nodes in a single rack connected to a top-of-rack switch, and a large-scale deployment on Blue Waters with arbitrary placement of nodes. Our main focus is the scalability of the system with respect to adding more nodes. Based on our results, this system has an excessive initialization overhead, caused by poor I/O utilization, which dominates the whole execution time. Because of this, the system does not scale beyond a few nodes (4 on Blue Waters). Additionally, due to its single-server, multiple-worker design, the server becomes a bottleneck beyond 16 nodes, limiting the scalability of CNTK.
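To make the scaling argument concrete, the following is a minimal sketch, not the paper's actual performance model, of an Amdahl-style decomposition: total run time as a fixed initialization cost, computation that divides across nodes, and a communication term that grows with node count because a single parameter server serializes updates. All constants are hypothetical, chosen only to illustrate why speedup saturates and then degrades.

# Illustrative sketch only: a hypothetical Amdahl-style model showing how a
# fixed initialization cost and a single-parameter-server communication term
# limit scaling with node count N. Constants are made up for demonstration.

T_INIT = 60.0          # fixed initialization/I-O overhead per run (seconds)
T_COMPUTE = 240.0      # total single-node computation time (seconds)
T_COMM_PER_NODE = 2.0  # per-node cost serialized at the lone server (seconds)

def run_time(n_nodes: int) -> float:
    """Total time: constant init + compute split across nodes +
    communication that grows linearly with the number of workers."""
    return T_INIT + T_COMPUTE / n_nodes + T_COMM_PER_NODE * n_nodes

if __name__ == "__main__":
    base = run_time(1)
    for n in (1, 2, 4, 8, 16, 32):
        t = run_time(n)
        print(f"nodes={n:3d}  time={t:7.1f}s  speedup={base / t:5.2f}x")

With these hypothetical constants, speedup peaks between 8 and 16 nodes and then declines at 32, the same qualitative shape the paper reports for CNTK: a constant initialization term caps the attainable speedup, and the linear server term eventually makes adding nodes counterproductive.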
Year
2016
Venue
arXiv: Distributed, Parallel, and Cluster Computing
Field
Bottleneck, Rack, Computer science, Real-time computing, Execution time, Initialization, Deep neural networks, Computation, Blue Waters, Distributed computing, Scalability
DocType
Journal
Volume
abs/1612.00521
Citations
1
PageRank
0.36
References
0
Authors
4
Name                Order  Citations  PageRank
Sayed Hadi Hashemi  1      2          1.40
Shadi A. Noghabi    2      14         4.63
William D. Gropp    3      5547       548.31
Roy Campbell        4      5133       573.61