Title |
---|
In-memory area-efficient signal streaming processor design for binary neural networks |
Abstract |
---|
The expanding use of deep learning algorithms drives demand for accelerating neural network (NN) signal processing. In-memory computation is desirable for NN processing because it eliminates expensive data transfers. Reflecting recently proposed binary neural networks (BNNs), which reduce computation resource and area requirements, we designed an in-memory BNN signal processor that densely stores binary weights in on-chip memories and scales linearly with a serial-parallel-serial signal stream. It achieved 3 and 71 times better per-power and per-area performance, respectively, than an existing in-memory neuromorphic processor. |
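For context on the abstract above: in a BNN, weights and activations are constrained to {-1, +1}, so a multiply-accumulate reduces to an XNOR followed by a population count. This is a minimal sketch of that standard XNOR-popcount formulation, not the paper's hardware design; the function names (`binarize`, `bnn_dot`) are illustrative.

```python
def binarize(values):
    """Encode a list of +/-1 values as an integer bit vector (bit=1 -> +1)."""
    bits = 0
    for i, v in enumerate(values):
        if v > 0:
            bits |= 1 << i
    return bits

def bnn_dot(w_bits, x_bits, n):
    """Dot product of two n-element {-1,+1} vectors via XNOR + popcount."""
    mask = (1 << n) - 1
    matches = bin(~(w_bits ^ x_bits) & mask).count("1")  # popcount of XNOR
    return 2 * matches - n  # matching bits minus mismatching bits

w = [+1, -1, +1, +1]
x = [+1, +1, +1, -1]
print(bnn_dot(binarize(w), binarize(x), 4))  # equals sum(wi*xi) = 0
```

Because each weight occupies a single bit, such an accelerator can store weights densely in on-chip memory, which is the property the abstract's in-memory design exploits.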
Year | DOI | Venue |
---|---|---|
2017 | 10.1109/MWSCAS.2017.8052874 | 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS) |
Keywords | Field | DocType
---|---|---|
binary neural networks,deep learning algorithms,neural network signal processing,in-memory computation,in-memory BNN signal processor,on-chip memories,serial-parallel-serial signal stream,in-memory area-efficient signal streaming processor design | Signal processing,Data transmission,Digital signal processor,Computer science,Neuromorphic engineering,Electronic engineering,Time delay neural network,Artificial intelligence,Deep learning,Computer hardware,Artificial neural network,Parallel computing,Processor design | Conference
ISBN | Citations | PageRank
---|---|---|
978-1-5090-6390-1 | 1 | 0.38
References | Authors
---|---|
7 | 11
Name | Order | Citations | PageRank |
---|---|---|---|
Haruyoshi Yonekawa | 1 | 34 | 4.37 |
Shimpei Sato | 2 | 12 | 2.94 |
Hiroki Nakahara | 3 | 155 | 37.34 |
Kota Ando | 4 | 24 | 6.81 |
Kodai Ueyoshi | 5 | 22 | 3.84 |
Kazutoshi Hirose | 6 | 5 | 2.94 |
Kentaro Orimo | 7 | 16 | 1.57 |
Shinya Takamaeda-Yamazaki | 8 | 65 | 16.83 |
M. Ikebe | 9 | 47 | 13.63 |
Tetsuya Asai | 10 | 121 | 26.53 |
Masato Motomura | 11 | 91 | 27.81 |