Title
Supporting compressed-sparse activations and weights on SIMD-like accelerator for sparse convolutional neural networks.
Abstract
Sparsity is widely observed in convolutional neural networks (CNNs): a large portion of both activations and weights can be zeroed without impairing the result. Keeping the data in a compressed-sparse format can considerably cut energy consumption through reduced memory traffic. However, the wide SIMD-like MAC engines adopted in many CNN accelerators cannot directly consume compressed inputs because of data misalignment. In this work, a novel Dual Indexing Module (DIM) is proposed to efficiently handle the alignment issue when both activations and weights are kept in a compressed-sparse format. The DIM is implemented in a representative SIMD-like CNN accelerator and is able to exploit both compressed-sparse activations and weights. Synthesis results in a 40nm technology show that the DIM improves energy consumption by up to 46% and Energy-Delay-Product (EDP) by up to 55.4%.
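The alignment problem the abstract describes can be pictured in software: when both operands are stored as (value, index) pairs, a multiply is valid only where the two index streams coincide. Below is a minimal C sketch of that index-matching step, offered as an assumption-laden illustration; the types and function names are invented here and are not the paper's hardware design or interface.

#include <stddef.h>

/* Compressed-sparse vector: nonzero values plus their original positions.
 * Illustrative type, not taken from the paper. */
typedef struct {
    const float *val;  /* nonzero values                              */
    const int   *idx;  /* positions of the nonzeros, sorted ascending */
    size_t       nnz;  /* number of nonzeros                          */
} sparse_vec;

/* Dot product of a compressed-sparse activation vector and a
 * compressed-sparse weight vector. The two-pointer merge over the sorted
 * index streams mirrors the pairing a dual-indexing scheme must resolve
 * before issuing MACs: a multiply-accumulate happens only where both
 * operands are nonzero at the same original position. */
static float sparse_dot(sparse_vec a, sparse_vec w)
{
    float acc = 0.0f;
    size_t i = 0, j = 0;
    while (i < a.nnz && j < w.nnz) {
        if (a.idx[i] == w.idx[j])
            acc += a.val[i++] * w.val[j++];  /* indices aligned: do the MAC */
        else if (a.idx[i] < w.idx[j])
            i++;                             /* skip unmatched activation   */
        else
            j++;                             /* skip unmatched weight       */
    }
    return acc;
}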
Year
2018
DOI
10.1109/ASPDAC.2018.8297290
Venue
ASP-DAC
Keywords
representative SIMD-like CNN accelerator, compressed-sparse activations, DIM, energy consumption, SIMD-like accelerator, sparse convolutional neural networks, compressed-sparse format, wide SIMD-like MAC engine, CNN accelerators, Energy-Delay-Product, Dual Indexing Module
Field
System on a chip, Convolutional neural network, Computer science, SIMD, Search engine indexing, Exploit, Real-time computing, Statistical model, Physical unclonable function, Computer engineering, Energy consumption
DocType
Conference
ISSN
2153-6961
ISBN
978-1-4503-6007-4
Citations
0
PageRank
0.34
References
7
Authors
2
Name, Order, Citations, PageRank
Chien-Yu Lin, 1, 2, 0.71
Bo-Cheng Charles Lai, 2, 177, 19.25