Title
Speeding-up neuromorphic computation for neural networks: Structure optimization approach
Abstract
This work addresses the structure optimization of neuromorphic computing architectures. It enables the computation of fully connected neural networks to run, theoretically, twice as fast as on existing architectures. Precisely, we propose a new neuromorphic computing architecture that mixes dendritic-based and axonal-based neuromorphic cores so as to completely eliminate the inherent non-zero waiting time between neuromorphic cores. In conjunction with the new architecture, we also propose a criterion for maximally utilizing computation units, so that the total resource overhead of computation units is minimized. We then extend the applicability of our proposed structure to convolutional and recurrent neural networks. Through a set of experiments, we demonstrate the effectiveness (i.e., speed and area) of our proposed architecture: ∼2x speedup of the neuromorphic computation with no accuracy penalty, or accuracy improvement with no time penalty.
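The abstract's ∼2x claim can be illustrated with a small timing sketch. The model below is one reading of the idea, not the authors' implementation: it assumes every layer has n neurons, a core spends one time step per neuron, a dendritic-based core streams its outputs one per step, and an axonal-based core consumes inputs as they arrive but releases all of its outputs only after the last input. Under these assumptions a dendritic-to-axonal handoff overlaps almost completely, while an axonal-to-dendritic handoff cannot, so alternating the two core types roughly halves the end-to-end latency. All layer sizes and step costs here are hypothetical.

```python
def homogeneous_latency(num_layers: int, n: int) -> int:
    """All cores of the same type: each layer must wait for the
    previous layer to finish completely, so latencies simply add up."""
    return num_layers * n

def mixed_latency(num_layers: int, n: int) -> int:
    """Layers alternate dendritic, axonal, dendritic, ... A dendritic
    core takes n steps and streams its outputs one per step; the axonal
    core behind it consumes that stream concurrently and finishes one
    step after its producer."""
    t = 0
    for k in range(num_layers):
        if k % 2 == 0:
            t += n  # dendritic core: n streaming steps
        else:
            t += 1  # axonal core: work is hidden behind the producer
    return t

if __name__ == "__main__":
    layers, n = 8, 256  # hypothetical network: 8 layers, 256 neurons each
    base = homogeneous_latency(layers, n)
    mixed = mixed_latency(layers, n)
    print(f"homogeneous: {base} steps, mixed: {mixed} steps, "
          f"speedup: {base / mixed:.2f}x")
```

For 8 layers of 256 neurons the sketch reports a speedup of about 1.99x, consistent with the abstract's theoretical bound; actual speedups would depend on per-core timing details the abstract does not give.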
Year
2022
DOI
10.1016/j.vlsi.2021.09.001
Venue
Integration
Keywords
Neural network, Performance, Architecture, Optimization
DocType
Journal
Volume
82
ISSN
0167-9260
Citations
0
PageRank
0.34
References
0
Authors
2
Name          Order  Citations  PageRank
Heechun Park  1      0          0.34
Taewhan Kim   2      0          0.34