Title
Iteratively Training Look-Up Tables for Network Quantization
Abstract
Operating deep neural networks (DNNs) on devices with limited resources requires reducing both their memory and their computational footprint. Popular reduction methods are network quantization and pruning, which either reduce the word length of the network parameters or remove weights from the network if they are not needed. In this article, we discuss a general framework for network reduction which we call Look-Up Table Quantization (LUT-Q). For each layer, we learn a value dictionary and an assignment matrix to represent the network weights. We propose a special solver that combines gradient descent with a one-step k-means update to learn both the value dictionaries and the assignment matrices iteratively. This method is very flexible: by constraining the value dictionary, many different reduction problems such as non-uniform network quantization, training of multiplier-less networks, network pruning, or simultaneous quantization and pruning can be implemented without changing the solver. This flexibility of the LUT-Q method allows us to use the same approach to train networks for different hardware capabilities.
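The solver described in the abstract alternates a gradient step with a one-step k-means update of the per-layer value dictionary and assignment matrix. The snippet below is a minimal NumPy sketch of that idea for a single flattened weight vector; the function name lut_q_step, the gradient callback grad_fn, and the learning rate lr are illustrative assumptions, not the paper's actual implementation, which operates on full network layers inside an SGD training loop.

    # Minimal sketch of one LUT-Q-style iteration (illustrative assumption,
    # not the authors' code): gradient step on float weights, then one
    # k-means update of assignments and dictionary values.
    import numpy as np

    def lut_q_step(w_float, dictionary, grad_fn, lr=0.01):
        # 1) Assign every float weight to its nearest dictionary value.
        assign = np.argmin(np.abs(w_float[:, None] - dictionary[None, :]), axis=1)
        w_quant = dictionary[assign]          # look-up-table (quantized) weights

        # 2) Gradient descent step: the gradient is evaluated at the quantized
        #    weights but applied to the float weights (straight-through style).
        w_float = w_float - lr * grad_fn(w_quant)

        # 3) One-step k-means: re-assign, then move each dictionary value to
        #    the mean of the float weights currently assigned to it.
        assign = np.argmin(np.abs(w_float[:, None] - dictionary[None, :]), axis=1)
        for k in range(dictionary.size):
            if np.any(assign == k):
                dictionary[k] = w_float[assign == k].mean()
        return w_float, dictionary, assign

Constraining the dictionary update, for example rounding its entries to powers of two or fixing one entry to zero, would cover the multiplier-less and pruning variants mentioned in the abstract without changing the rest of the loop.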
Year
2018
DOI
10.1109/JSTSP.2020.3005030
Venue
IEEE Journal of Selected Topics in Signal Processing
Keywords
Neural network compression, network quantization, look-up table quantization, weight tying, multiplier-less networks, multiplier-less batch normalization
DocType
Journal
Volume
14
Issue
4
ISSN
1932-4553
Citations
2
PageRank
0.43
References
0
Authors
7
Name                  Order  Citations  PageRank
Fabien Cardinaux      1      279        19.00
Stefan Uhlich         2      35         7.62
Kazuki Yoshiyama      3      4          1.46
Javier Alonso García  4      4          1.46
Stephen Tiedemann     5      4          1.46
Thomas Kemp           6      246        30.93
Akio Nakamura         7      62         14.45