Abstract |
---|
Deep learning has recently exhibited high accuracy and broad applicability in the machine learning field, at the cost of consuming tremendous computational resources to process massive data. To improve the performance of deep learning, GPUs have been introduced to accelerate the training phase. The complex data processing infrastructure demands highly efficient collaboration among underlying hardware components, such as the CPU, GPU, memory, and storage devices. Unfortunately, little work has presented a systematic analysis of the impact of hardware configurations on the overall performance of deep learning. In this paper, we conduct an experimental study on a standalone system to evaluate how various hardware configurations affect the overall performance of deep learning. We ran a series of experiments with varied configurations of storage devices, main memory, CPU, and GPU to observe the overall performance quantitatively. Analyzing these results, we found that performance greatly depends on the hardware configuration. Specifically, computation remains the primary bottleneck, as two GPUs and three GPUs shorten the execution time by 44% and 59%, respectively. In addition, both CPU frequency and the storage subsystem can significantly affect the running time, while memory size has no obvious effect on the running time of training neural network models. We believe our experimental results can help shed light on further optimizing the performance of deep learning in computer systems. |
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/NAS.2017.8026843 | 2017 International Conference on Networking, Architecture, and Storage (NAS) |
Keywords | Field | DocType |
---|---|---|
triple GPUs,double GPUs,CPU,main memory,storage devices,standalone system,complex data processing infrastructure,massive data processing,machine learning,hardware configurations,deep learning | Bottleneck,Computer science,Complex data type,Real-time computing,Execution time,Artificial intelligence,Deep learning,Computer hardware,Artificial neural network,Computation | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-5386-3487-5 | 1 | 0.40 |
References | Authors |
---|---|
8 | 6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jingjun Li | 1 | 6 | 1.86 |
Chen Zhang | 2 | 112 | 41.68 |
Qiang Cao | 3 | 593 | 57.50 |
Chuanyi Qi | 4 | 1 | 0.40 |
Jianzhong Huang | 5 | 87 | 19.32 |
Changsheng | 6 | 3 | 2.46