Title
Latency Estimation Tool and Investigation of Neural Networks Inference on Mobile GPU
Abstract
Many deep learning applications need to run on mobile devices, and for many of them both accuracy and inference time matter. While the number of FLOPs is commonly used as a proxy for neural network latency, it is not always a good predictor. To obtain a better approximation of latency, the research community uses lookup tables of all possible layers to estimate inference time on a mobile CPU; this requires only a small number of experiments. Unfortunately, on a mobile GPU this method is not applicable in a straightforward way and shows low precision. In this work, we treat latency approximation on a mobile GPU as a data- and hardware-specific problem. Our main goal is to construct a convenient Latency Estimation Tool for Investigation (LETI) of neural network inference and to build robust and accurate latency prediction models for each specific task. To achieve this goal, we develop tools that provide a convenient way to conduct massive experiments on different target devices, focusing on the mobile GPU. Once the dataset is collected, one can train a regression model on the experimental data and use it for future latency prediction and analysis. We experimentally demonstrate the applicability of this approach on a subset of the popular NAS-Benchmark 101 dataset for two different mobile GPUs.
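The core idea of the abstract — predicting latency from measured data rather than from a FLOPs proxy or a lookup table — can be sketched with a toy regression. The data below is hypothetical and the single-feature linear model is far simpler than what a tool like LETI would train on real per-device measurements; it only illustrates fitting a predictor to experimental (architecture feature, latency) pairs.

```python
# Minimal sketch (hypothetical data): fit latency ≈ a * GFLOPs + b by
# ordinary least squares over measured pairs from one target device,
# then predict latency for an unseen network. A real latency model
# would use many architecture features and a richer regressor.

def fit_linear(xs, ys):
    """Closed-form 1-D least squares: returns slope a and intercept b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical measurements from one mobile GPU (GFLOPs, latency in ms).
gflops = [0.5, 1.0, 2.0, 4.0]
latency_ms = [3.1, 5.2, 9.8, 19.5]

a, b = fit_linear(gflops, latency_ms)
pred = a * 3.0 + b  # predicted latency for a hypothetical 3-GFLOP network
print(round(pred, 2))  # → 14.7
```

The point of the data-driven approach is that `a` and `b` (or their many-parameter analogues) are fitted per device, capturing hardware-specific behavior that a device-agnostic FLOPs count misses.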
Year
2021
DOI
10.3390/computers10080104
Venue
COMPUTERS
Keywords
latency, inference, mobile GPU, neural architecture search
DocType
Journal
Volume
10
Issue
8
ISSN
2073-431X
Citations
1
PageRank
0.36
References
0
Authors
4
Name                Order  Citations  PageRank
Evgeny Ponomarev    1      1          0.36
Sergey Matveev      2      1          0.36
Ivan V. Oseledets   3      306        41.96
Valery Glukhov      4      1          0.36