Abstract |
---|
In this paper, a reconfigurable and scalable spiking neural network processor, containing 192 neurons and 6144 synapses, is developed. By applying a deep compression technique to the spiking neural network chip, the number of physical synapses can be reduced to 1/16 of that needed by the original network while accuracy is maintained. This compression technique greatly reduces both the amount of SRAM inside the chip and the chip's power consumption. The design achieves a throughput per unit area of 1.1 GSOP/(s·mm²) at 1.2 V and an energy cost of 35 pJ per SOP. A 2-layer fully-connected spiking neural network is mapped onto the chip, enabling handwritten digit recognition on MNIST with an accuracy of 91.2%. |
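The deep compression step described in the abstract, reducing physical synapses to 1/16 of the original network, can be sketched as simple magnitude-based weight pruning. This is an illustrative assumption: the array shape, the threshold rule, and the use of NumPy are hypothetical, not the paper's actual on-chip implementation.

```python
import numpy as np

# Illustrative magnitude-based pruning: keep the largest 1/16 of synaptic
# weights and zero out the rest. The 192x32 shape gives 6144 synapses, as
# in the chip, but the layout here is hypothetical.
rng = np.random.default_rng(0)
weights = rng.standard_normal((192, 32))

keep_fraction = 1 / 16
k = int(weights.size * keep_fraction)           # weights to retain (384)
threshold = np.sort(np.abs(weights), axis=None)[-k]

mask = np.abs(weights) >= threshold             # sparse connectivity mask
pruned = weights * mask                         # compressed weight matrix

print(int(mask.sum()))                          # physical synapses remaining
```

Only the retained weights would need physical SRAM on chip, which is how this style of compression cuts both memory and power.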
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/TBCAS.2019.2952714 | IEEE Transactions on Biomedical Circuits and Systems |
Keywords | DocType | Volume |
---|---|---|
Deep compression, network-on-chip, neuron, spiking neural network, synapse | Journal | 14 |

Issue | ISSN | Citations |
---|---|---|
2 | 1932-4545 | 2 |

PageRank | References | Authors |
---|---|---|
0.36 | 0 | 12 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yang Liu | 1 | 2194 | 188.81 |
T. P. Chen | 2 | 5 | 1.78 |
Qi Yu | 3 | 14 | 5.87 |
Yang Liu | 4 | 2 | 1.71 |
Kun Qian | 5 | 2 | 0.36 |
S G Hu | 6 | 2 | 0.36 |
Kun An | 7 | 2 | 0.36 |
Sheng Xu | 8 | 507 | 71.47 |
Xitong Zhan | 9 | 2 | 0.36 |
Jing Wang | 10 | 410 | 41.91 |
Rui Guo | 11 | 16 | 5.95 |
Yuancong Wu | 12 | 2 | 0.36 |