Abstract |
---|
A deep neural network occupies a large amount of memory during its training phase. Since the computing environments of future IoT devices are restricted, a more hardware-aware approach with a smaller energy and memory footprint must be considered. In this paper, we propose a novel neural network training method that decreases memory usage by optimizing the representation format of temporary data in the training phase. Most gradient values in training are likely to be close to zero. Our approach employs logarithmic quantization, which expresses a numerical value logarithmically, to reduce the bit width. We evaluate the proposed method in terms of memory footprint and prediction accuracy. The results show that the proposed method effectively reduces the memory footprint by about 60% with only a slight degradation in prediction accuracy. |
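The abstract describes quantizing near-zero gradient values logarithmically to shrink their bit width. A minimal sketch of that general idea is shown below, assuming a simple sign-plus-exponent encoding in the log2 domain; this is an illustration of logarithmic quantization in general, not the exact representation format proposed in the paper.

```python
import numpy as np

def log_quantize(x, bits=5):
    """Encode each value as a sign and a rounded log2 exponent (bits-wide)."""
    sign = np.sign(x)                       # zero stays zero
    mag = np.maximum(np.abs(x), 1e-38)      # guard against log2(0)
    lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    exp = np.clip(np.round(np.log2(mag)), lo, hi)
    return sign.astype(np.int8), exp.astype(np.int8)

def log_dequantize(sign, exp):
    """Reconstruct an approximate value from the sign/exponent encoding."""
    return sign * (2.0 ** exp.astype(np.float64))

# Gradients that are exact powers of two round-trip losslessly;
# other values snap to the nearest power of two.
grads = np.array([0.5, -0.25, 0.125, 0.3, 0.0])
sign, exp = log_quantize(grads)
approx = log_dequantize(sign, exp)
```

With 1 sign bit and a 5-bit exponent, each value needs 6 bits instead of 32, which is the kind of bit-width reduction the paper targets for temporary training data.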
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/CANDAR.2017.81 | 2017 Fifth International Symposium on Computing and Networking (CANDAR) |
Keywords | Field | DocType |
---|---|---|
logarithmic compression,memory footprint reduction,neural network training,memory space,training phase,computing environment,hardware-aware approach,memory usage,logarithmic quantization,IoT devices,representation format,gradient values,prediction accuracy,deep neural network,smaller energy | Compression (physics),Computer science,Internet of Things,Server,Memory management,Logarithm,Memory footprint,Backpropagation,Artificial neural network,Computer engineering | Conference |
ISSN | ISBN | Citations |
---|---|---|
2379-1888 | 978-1-5386-2088-5 | 0 |
PageRank | References | Authors |
---|---|---|
0.34 | 2 | 9 |
Name | Order | Citations | PageRank |
---|---|---|---|
Kazutoshi Hirose | 1 | 5 | 2.94 |
Ryota Uematsu | 2 | 1 | 1.79 |
Kota Ando | 3 | 24 | 6.81 |
Kentaro Orimo | 4 | 16 | 1.57 |
Kodai Ueyoshi | 5 | 3 | 1.65 |
M. Ikebe | 6 | 47 | 13.63 |
Tetsuya Asai | 7 | 121 | 26.53 |
Shinya Takamaeda-Yamazaki | 8 | 65 | 16.83 |
Masato Motomura | 9 | 91 | 27.81 |