Title
Evaluating and Analyzing the Energy Efficiency of CNN Inference on High-Performance GPU
Abstract
Convolutional neural network (CNN) inference usually runs on high-performance graphics processing units (GPUs). Because GPUs draw substantial power, deep learning workloads cause energy consumption to rise sharply. The energy efficiency of CNN inference depends not only on the software and hardware configuration but also on the application requirements of the inference tasks; however, it has not been well characterized on GPUs. In this paper, we conduct a comprehensive study of the model-level and layer-level energy efficiency of popular CNN models. The results point out several opportunities for further optimization. We also analyze the parameter settings (i.e., batch size, dynamic voltage and frequency scaling) and propose a revenue model that allows an optimal trade-off between energy efficiency and latency. Compared with the default settings, the optimal settings improve revenue by up to 15.31x. We obtain the following main findings: (i) GPUs do not exploit the parallelism available from model depth and small convolution kernels, resulting in low energy efficiency. (ii) Convolutional layers are the most energy-consuming CNN layers; however, owing to the cache, the power consumption of all layers is relatively balanced. (iii) The energy efficiency of TensorRT is 1.53x that of TensorFlow.
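As an illustration of the revenue-based trade-off mentioned in the abstract, the minimal Python sketch below scores candidate (batch size, GPU core frequency) settings by rewarding energy efficiency (inferences per joule) and penalizing batch latency. The weighted-score form, the names (Config, revenue, alpha, beta), and all numbers are assumptions for illustration only, not the paper's actual model or measurements.

# Illustrative sketch (not the paper's exact formulation): pick a
# (batch size, GPU core frequency) setting that balances energy
# efficiency against latency with a simple revenue-style score.
# All numbers below are hypothetical placeholders, not measured data.
from dataclasses import dataclass

@dataclass
class Config:
    batch_size: int
    core_freq_mhz: int
    throughput_ips: float   # inferences per second (hypothetical)
    avg_power_w: float      # average GPU power draw in watts (hypothetical)

    @property
    def energy_efficiency(self) -> float:
        """Inferences per joule = throughput / power."""
        return self.throughput_ips / self.avg_power_w

    @property
    def latency_s(self) -> float:
        """Seconds to process one batch."""
        return self.batch_size / self.throughput_ips

def revenue(cfg: Config, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Toy revenue score: reward energy efficiency, penalize latency.

    The paper proposes a revenue model trading off energy efficiency and
    latency; this weighted linear form is only an assumed stand-in.
    """
    return alpha * cfg.energy_efficiency - beta * cfg.latency_s

if __name__ == "__main__":
    # Hypothetical measurements for a few candidate settings.
    candidates = [
        Config(batch_size=1,   core_freq_mhz=1530, throughput_ips=450.0,  avg_power_w=110.0),
        Config(batch_size=32,  core_freq_mhz=1530, throughput_ips=2100.0, avg_power_w=230.0),
        Config(batch_size=64,  core_freq_mhz=1230, throughput_ips=1900.0, avg_power_w=170.0),
        Config(batch_size=128, core_freq_mhz=1230, throughput_ips=2000.0, avg_power_w=180.0),
    ]
    best = max(candidates, key=revenue)
    print(f"best setting: batch={best.batch_size}, freq={best.core_freq_mhz} MHz, "
          f"efficiency={best.energy_efficiency:.2f} inf/J, latency={best.latency_s * 1000:.1f} ms")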
Year
2021
DOI
10.1002/cpe.6064
Venue
CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE
Keywords
CNNs, energy efficiency, high-performance GPU, inference
DocType
Journal
Volume
33
Issue
6
ISSN
1532-0626
Citations
1
PageRank
0.35
References
0
Authors
7
Name            Order   Citations   PageRank
Chunrong Yao    1       3           1.76
Wantao Liu      2       73          8.29
Weiqing Tang    3       1           0.35
Jinrong Guo     4       7           2.55
Songlin Hu      5       126         30.82
Lu Yijun        6       1           2.38
Wei Jiang       7       1           1.03