Title
Application of Deep Compression Technique in Spiking Neural Network Chip
Abstract
In this paper, a reconfigurable and scalable spiking neural network processor containing 192 neurons and 6144 synapses is developed. By applying the deep compression technique to the spiking neural network chip, the number of physical synapses can be reduced to 1/16 of that required by the original network while the accuracy is maintained. This compression greatly reduces the number of on-chip SRAMs as well as the power consumption of the chip. The design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V and an energy consumption of 35 pJ per SOP. A 2-layer fully-connected spiking neural network is mapped onto the chip, enabling handwritten digit recognition on MNIST with an accuracy of 91.2%.
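The abstract does not describe the compression procedure in detail; the following is a minimal sketch of the general deep-compression idea (magnitude-based pruning plus weight sharing) that keeps 1/16 of the synapses, using illustrative layer sizes and a 16-level codebook that are assumptions rather than the authors' implementation.

```python
# Sketch (not from the paper): prune a fully-connected synapse matrix to
# 1/16 density, then quantize surviving weights to a shared codebook so
# only small indices need to be stored in on-chip SRAM.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(784, 192))   # hypothetical input-to-hidden layer

# 1) Pruning: keep only the largest-magnitude 1/16 of the weights.
k = weights.size // 16
threshold = np.sort(np.abs(weights), axis=None)[-k]
mask = np.abs(weights) >= threshold
pruned = weights * mask

# 2) Weight sharing: map surviving weights to a 16-entry codebook
#    (i.e. 4-bit indices instead of full-precision values).
survivors = pruned[mask]
levels = np.linspace(survivors.min(), survivors.max(), 16)
indices = np.abs(survivors[:, None] - levels[None, :]).argmin(axis=1)
compressed = np.zeros_like(pruned)
compressed[mask] = levels[indices]

print(f"physical synapses kept: {mask.sum()} / {weights.size} "
      f"({mask.sum() / weights.size:.4f})")
```

In practice the pruned network would be retrained to recover accuracy before quantization; the sketch only illustrates why the on-chip synapse memory can shrink by roughly 16x.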
Year
2020
DOI
10.1109/TBCAS.2019.2952714
Venue
IEEE Transactions on Biomedical Circuits and Systems
Keywords
Deep compression, network-on-chip, neuron, spiking neural network, synapse
DocType
Journal
Volume
14
Issue
2
ISSN
1932-4545
Citations
2
PageRank
0.36
References
0
Authors
12
Name           Order  Citations  PageRank
Yang Liu       1      2194       188.81
T. P. Chen     2      5          1.78
Qi Yu          3      14         5.87
Yang Liu       4      2          1.71
Kun Qian       5      2          0.36
S G Hu         6      2          0.36
Kun An         7      2          0.36
Sheng Xu       8      507        71.47
Xitong Zhan    9      2          0.36
Jing Wang      10     410        41.91
Rui Guo        11     16         5.95
Yuancong Wu    12     2          0.36