Title
Logarithmic Compression for Memory Footprint Reduction in Neural Network Training
Abstract
Deep neural networks occupy a large amount of memory during the training phase. Since the computing environment of future IoT devices is restricted, a more hardware-aware approach with a smaller energy and memory footprint must be considered. In this paper, we propose a novel neural network training method that decreases memory usage by optimizing the representation format of temporary data in the training phase. Most gradient values in training are likely to be close to zero. Our approach employs logarithmic quantization, which expresses numerical values logarithmically to reduce the bit width. We evaluate the proposed method in terms of memory footprint and prediction accuracy. The results show that the proposed method effectively reduces the memory footprint by about 60% with only a slight degradation of prediction accuracy.
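The abstract does not include implementation details of the authors' quantizer. As a rough illustration of the general idea only, the following NumPy sketch (hypothetical; function names, bit widths, and clipping range are assumptions, not the paper's scheme) stores a gradient tensor as a sign plus a rounded base-2 logarithm of its magnitude, which is compact precisely because most gradient values cluster near zero.

```python
import numpy as np

def log_quantize(x, frac_bits=3, max_exp=0, min_exp=-14):
    """Quantize values to sign + rounded log2 magnitude (generic sketch,
    not the authors' exact format). Near-zero values map to a small,
    clipped exponent range, so few bits are needed per element."""
    sign = np.sign(x)
    mag = np.abs(x)
    # Exact zeros get the reserved minimum exponent.
    exp = np.full_like(mag, min_exp, dtype=np.float64)
    nonzero = mag > 0
    scale = 2 ** frac_bits
    exp[nonzero] = np.round(np.log2(mag[nonzero]) * scale) / scale
    return sign, np.clip(exp, min_exp, max_exp)

def log_dequantize(sign, exp, min_exp=-14):
    """Reconstruct approximate values from sign and quantized exponent."""
    out = sign * np.exp2(exp)
    out[exp <= min_exp] = 0.0  # restore exact zeros (and underflows)
    return out

# Example: gradient-like values concentrated around zero.
grads = np.random.normal(0.0, 1e-2, size=(4, 4))
s, e = log_quantize(grads)
print(np.max(np.abs(grads - log_dequantize(s, e))))  # small reconstruction error
```

In such a scheme, the stored sign and quantized exponent can be packed into a narrow integer word, which is the source of the memory reduction the abstract reports.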
Year
2017
DOI
10.1109/CANDAR.2017.81
Venue
2017 Fifth International Symposium on Computing and Networking (CANDAR)
Keywords
logarithmic compression, memory footprint reduction, neural network training, memory space, training phase, computing environment, hardware-aware approach, memory usage, logarithmic quantization, IoT devices, representation format, gradient values, prediction accuracy, deep neural network, smaller energy
Field
Compression (physics), Computer science, Internet of Things, Server, Memory management, Logarithm, Memory footprint, Backpropagation, Artificial neural network, Computer engineering
DocType
Conference
ISSN
2379-1888
ISBN
978-1-5386-2088-5
Citations
0
PageRank
0.34
References
2
Authors
9
Name                           Order   Citations   PageRank
Kazutoshi Hirose               1       5           2.94
Ryota Uematsu                  2       1           1.79
Kota Ando                      3       24          6.81
Kentaro Orimo                  4       16          1.57
Kodai Ueyoshi                  5       3           1.65
M. Ikebe                       6       47          13.63
Tetsuya Asai                   7       121         26.53
Shinya Takamaeda-Yamazaki      8       65          16.83
Masato Motomura                9       91          27.81