Title
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference.
Abstract
The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, it can take a long time for DNNs to make an inference on resource-constrained computing devices. Model compression techniques can address the computation issue of deep inference on embedded devices. Model compression is highly attractive because it does not rely on specialized hardware or on computation offloading, which is often infeasible due to privacy concerns or high latency. However, it remains unclear how model compression techniques perform across a wide range of DNNs. To design efficient embedded deep learning solutions, we need to understand their behaviors. This work develops a quantitative approach to characterize model compression techniques on a representative embedded deep learning architecture, the NVIDIA Jetson TX2. We perform extensive experiments on 11 influential neural network architectures from the image classification and natural language processing domains. We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures, and the implications of compression for model storage size, inference time, energy consumption and performance metrics. We demonstrate that there are opportunities to achieve fast deep inference on embedded systems, but one must carefully choose the compression settings. Our results provide insights into when and how to apply model compression techniques, as well as guidelines for designing efficient embedded deep learning systems.
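As background for the two techniques the abstract names, the following is a minimal, framework-free sketch of 8-bit linear weight quantization and magnitude-based pruning. It is illustrative only and not the paper's implementation; the function names and the per-tensor quantization scheme are assumptions for this sketch.

```python
def quantize_8bit(weights):
    """Linearly map float weights onto 256 integer levels (one scale per tensor).

    Illustrative sketch only -- not the scheme used in the paper.
    Returns the integer codes and the dequantized (reconstructed) weights.
    """
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # guard against a constant tensor
    codes = [round((w - lo) / scale) for w in weights]   # int codes in [0, 255]
    dequant = [c * scale + lo for c in codes]            # values used at inference
    return codes, dequant

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k] if k else float("-inf")
    return [0.0 if abs(w) < threshold else w for w in weights]
```

Quantization trades a bounded reconstruction error (at most half the scale per weight) for a 4x storage reduction relative to 32-bit floats, while pruning trades accuracy for sparsity; the paper's characterization concerns how such trade-offs play out on real embedded hardware.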
Year: 2018
DOI: 10.1109/BDCloud.2018.00110
Venue: ISPA/IUCC/BDCloud/SocialCom/SustainCom
Keywords: Deep learning, embedded system, parallelism, energy efficiency, deep inference
Field: Deep inference, Inference, Computer science, Efficient energy use, Network architecture, Human–computer interaction, Artificial intelligence, Deep learning, Artificial neural network, Contextual image classification, Computer engineering, Energy consumption
DocType: Journal
Volume: abs/1810.08899
ISSN: 2158-9178
Citations: 0
PageRank: 0.34
References: 0
Authors: 9
Name           Order  Citations  PageRank
Qing Qin       1      0          0.34
Jie Ren        2      0          3.04
Jialong Yu     3      0          0.34
Ling Gao       4      7          4.52
Hai Wang       5      0          2.03
Jie Zheng      6      3          2.08
Yansong Feng   7      735        64.17
Jianbin Fang   8      265        25.31
Zheng Wang     9      79         10.37