Title
2<sup>n</sup>+1-valued SSS-Net: Uniform Shift, Channel Sparseness, and Channel Shuffle
Abstract
Convolutional neural networks (CNNs) are essentially cascades of pattern-recognition filters trained on big data, which enables them to solve complex computer vision problems. However, a conventional CNN requires numerous parameters (weights) and computations. In this study, we propose SSS-Net, which uses uniform channel shift, weight sparseness, and channel shuffle operations. We prove that a conventional k × k kernel convolution can be decomposed into k<sup>2</sup> channel shift operations followed by a point-wise (1 × 1) convolution. We develop a 2<sup>n</sup>+1-valued quantization technique with zero-weight sparseness. We investigate the weight distribution of a post-training CNN and observe that almost all weights are close to zero. We eliminate such small weights to reduce the model size, and the remaining non-zero weights are efficiently quantized to 2<sup>n</sup> values. We present an algorithm for a uniform shift with quantization; since a uniform shift operation requires no multiplications, its computation cost is zero. We train SSS (Shift, Sparseness, and Shuffle)-Net on the ImageNet 2012 benchmark. Compared with existing CNN models, our SSS-Net has a smaller model size and fewer MAC operations while achieving comparable recognition accuracy.
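The decomposition claim in the abstract can be checked numerically. Below is a minimal NumPy sketch (not the paper's implementation; function names and shapes are illustrative) showing that a k × k "same" convolution equals k<sup>2</sup> zero-padded channel shifts, each combined by a point-wise (1 × 1) step:

```python
import numpy as np

def shift2d(x, dy, dx):
    """Shift every channel of x (C, H, W) so out[:, i, j] = x[:, i+dy, j+dx],
    zero-padding at the borders. Pure data movement: no multiplications."""
    _, H, W = x.shape
    out = np.zeros_like(x)
    src_y = slice(max(dy, 0), H + min(dy, 0))
    src_x = slice(max(dx, 0), W + min(dx, 0))
    dst_y = slice(max(-dy, 0), H + min(-dy, 0))
    dst_x = slice(max(-dx, 0), W + min(-dx, 0))
    out[:, dst_y, dst_x] = x[:, src_y, src_x]
    return out

def conv_kxk(x, w):
    """Reference k x k 'same' convolution: x is (C, H, W), w is (O, C, k, k)."""
    O, C, k, _ = w.shape
    p = k // 2
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    y = np.zeros((O, H, W))
    for u in range(k):
        for v in range(k):
            y += np.einsum('oc,chw->ohw', w[:, :, u, v], xp[:, u:u + H, v:v + W])
    return y

def conv_as_shifts_plus_pointwise(x, w):
    """Same output via k^2 channel shifts, each followed by a 1x1 (point-wise) part."""
    O, C, k, _ = w.shape
    p = k // 2
    y = np.zeros((O,) + x.shape[1:])
    for u in range(k):
        for v in range(k):
            xs = shift2d(x, u - p, v - p)                      # shift: no MACs
            y += np.einsum('oc,chw->ohw', w[:, :, u, v], xs)   # point-wise part
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))
w = rng.normal(size=(6, 4, 3, 3))
assert np.allclose(conv_kxk(x, w), conv_as_shifts_plus_pointwise(x, w))
```

Because each shift is pure data movement, all arithmetic concentrates in the 1 × 1 part, which is the property the shift-based decomposition exploits.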
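Similarly, one plausible reading of the 2<sup>n</sup>+1-valued quantization with zero-weight sparseness is sketched below. The pruning threshold and the choice of signed power-of-two levels are assumptions for illustration, not the paper's exact algorithm; with power-of-two levels, a multiplication by a weight reduces to a bit shift, consistent with the multiplication-free goal:

```python
import numpy as np

def quantize_2n_plus_1(w, n=4, prune_thresh=1e-2):
    """Sketch of (2^n + 1)-valued quantization with zero-weight sparseness.
    Weights with |w| < prune_thresh are pruned to 0; survivors snap to the
    nearest of 2^n signed power-of-two levels (2^(n-1) magnitudes per sign),
    giving at most 2^n + 1 distinct values overall. The threshold and the
    level range below are illustrative assumptions."""
    q = np.zeros_like(w)
    keep = np.abs(w) >= prune_thresh          # sparseness: small weights -> 0
    if not keep.any():
        return q
    e_max = int(np.ceil(np.log2(np.abs(w[keep]).max())))
    exponents = np.arange(e_max - 2 ** (n - 1) + 1, e_max + 1)
    levels = np.concatenate([-(2.0 ** exponents), 2.0 ** exponents])
    nearest = np.argmin(np.abs(w[keep][:, None] - levels[None, :]), axis=1)
    q[keep] = levels[nearest]
    return q

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(256, 256))
q = quantize_2n_plus_1(w, n=4, prune_thresh=0.02)
assert len(np.unique(q)) <= 2 ** 4 + 1    # zero plus at most 2^n non-zero levels
```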
Year
2020
DOI
10.1109/ISMVL49045.2020.000-5
Venue
2020 IEEE 50th International Symposium on Multiple-Valued Logic (ISMVL)
Keywords
Deep Learning, CNN, Multi-valued Logic
DocType
Conference
ISSN
0195-623X
ISBN
978-1-7281-5407-7
Citations
0
PageRank
0.34
References
0
Authors
1
Name
Hiroki Nakahara
Order
1
Citations
1553
PageRank
7.34