Title
Estimating GPU memory consumption of deep learning models
Abstract
Deep learning (DL) has been increasingly adopted by a variety of software-intensive systems. Developers mainly use GPUs to accelerate the training, testing, and deployment of DL models. However, the GPU memory consumed by a DL model is often unknown to them before the DL job executes. Therefore, an improper choice of neural architecture or hyperparameters can cause such a job to run out of the limited GPU memory and fail. Our recent empirical study has found that many DL job failures are due to the exhaustion of GPU memory. This leads to a horrendous waste of computing resources and a significant reduction in development productivity. In this paper, we propose DNNMem, an accurate estimation tool for GPU memory consumption of DL models. DNNMem employs an analytic estimation approach to systematically calculate the memory consumption of both the computation graph and the DL framework runtime. We have evaluated DNNMem on 5 real-world representative models with different hyperparameters under 3 mainstream frameworks (TensorFlow, PyTorch, and MXNet). Our extensive experiments show that DNNMem is effective in estimating GPU memory consumption.
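To make the idea of analytic estimation concrete, here is a minimal illustrative sketch (not DNNMem's actual algorithm): given hypothetical layer shapes from a computation graph, it sums the bytes of weight and activation tensors and adds a fixed framework-runtime overhead. All layer names, shapes, and the overhead constant are assumptions for illustration.

```python
# Illustrative sketch of analytic GPU memory estimation (hypothetical,
# not DNNMem itself): walk a toy computation graph and sum tensor sizes.
from math import prod

BYTES_PER_ELEM = {"float32": 4, "float16": 2}

def tensor_bytes(shape, dtype="float32"):
    """Memory of one tensor: product of its dimensions times element size."""
    return prod(shape) * BYTES_PER_ELEM[dtype]

def estimate_graph_memory(layers, batch_size, runtime_overhead=512 * 2**20):
    """Sum weight and activation tensors over all layers, plus an assumed
    fixed overhead for the DL framework runtime (CUDA context, workspaces)."""
    total = runtime_overhead
    for layer in layers:
        total += tensor_bytes(layer["weights"])                    # parameters
        total += tensor_bytes((batch_size, *layer["activation"]))  # outputs
    return total

# Hypothetical two-layer fully connected model, batch size 32
layers = [
    {"weights": (784, 1024), "activation": (1024,)},
    {"weights": (1024, 10),  "activation": (10,)},
]
estimated = estimate_graph_memory(layers, batch_size=32)
```

A real estimator must also account for framework-specific behavior such as gradient and optimizer-state tensors, memory-allocator fragmentation, and operator workspaces, which is what makes the problem non-trivial across TensorFlow, PyTorch, and MXNet.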
Year: 2020
DOI: 10.1145/3368089.3417050
Venue: ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, Virtual Event, USA, November 2020
DocType: Conference
ISBN: 978-1-4503-7043-1
Citations: 3
PageRank: 0.42
References: 14
Authors: 7
Name           Order  Citations  PageRank
Yanjie Gao     1      3          0.42
Yang Liu       2      83         20.95
Hongyu Zhang   3      864        50.03
Zhengxian Li   4      3          0.42
Yonghao Zhu    5      3          0.42
Haoxiang Lin   6      18         19.29
Mao Yang       7      29         7.41