Title
A Hardware-Friendly Approach Towards Sparse Neural Networks Based on LFSR-Generated Pseudo-Random Sequences
Abstract
The increase in the number of edge devices has led to the emergence of edge computing, where computations are performed directly on the device. In recent years, deep neural networks (DNNs) have become the state-of-the-art method in a broad range of applications, from image recognition and cognitive tasks to control. However, neural network models are typically large and computationally expensive, and therefore not deployable on power- and memory-constrained edge devices. Sparsification techniques have been proposed to reduce the memory footprint of neural network models, but they typically incur substantial hardware and memory overhead. In this article, we propose a hardware-aware pruning method that uses linear feedback shift registers (LFSRs) to generate the locations of non-zero weights in real time during inference. We call this the LFSR-generated pseudo-random sequence based sparsity (LGPS) technique. We explore two architectures for the hardware-friendly LGPS technique, based on (1) row/column indexing with LFSRs and (2) column-wise indexing with nested LFSRs, respectively. Using the proposed method, we achieve total energy and area savings of up to 37.47% and 49.93%, respectively, and a speed-up of 1.53× with respect to the baseline pruning method, for the VGG-16 network on down-sampled ImageNet.
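To illustrate the core idea behind the abstract (not the authors' actual hardware implementation), a maximal-length LFSR visits every non-zero register state exactly once, so the indices of retained weights can be regenerated on the fly from just a seed and a tap configuration instead of being stored. A minimal Python sketch, assuming a hypothetical 4-bit Fibonacci LFSR with taps (4, 3), i.e. the primitive polynomial x^4 + x^3 + 1:

```python
def lfsr_indices(seed, taps, nbits):
    """Fibonacci LFSR over `nbits` bits; yields successive register states.

    With a maximal-length tap configuration the register cycles through
    every value in 1 .. 2**nbits - 1 exactly once, so the stream of states
    can serve as pseudo-random weight indices that never need to be stored:
    only (seed, taps) is kept, as in an LGPS-style scheme.
    """
    state = seed
    while True:
        # XOR the tapped bits (1-indexed from the LSB) to form the feedback bit.
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield state
        if state == seed:  # full period reached, stop
            return

# Example: 4-bit maximal-length LFSR (taps 4,3 -> x^4 + x^3 + 1), period 15.
seq = list(lfsr_indices(seed=0b0001, taps=(4, 3), nbits=4))
# seq visits each of 1..15 exactly once, in pseudo-random order.

# Hypothetical usage: keep only the weights whose (1-based) flat index
# appears among the first k LFSR outputs; all other weights are pruned.
k = 5
keep = set(seq[:k])
weights = list(range(1, 16))  # toy 15-element weight vector
sparse = [w if i + 1 in keep else 0 for i, w in enumerate(weights)]
```

The seed and tap positions fully determine the sparsity pattern, which is what makes the scheme hardware-friendly: no index memory is required at inference time.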
Year
2021
DOI
10.1109/TCSI.2020.3037028
Venue
IEEE Transactions on Circuits and Systems I: Regular Papers
Keywords
Sparsity, sparse neural network, LFSR, DNN accelerator, linear feedback shift register
DocType
Journal
Volume
68
Issue
1
ISSN
1549-8328
Citations
2
PageRank
0.38
References
0
Authors
5
Name                  Order  Citations  PageRank
Foroozan Karimzadeh   1      1          0.38
Ningyuan Cao          2      13         3.39
Brian Crafton         3      3          2.18
Justin Romberg        4      61         5.01
Arijit Raychowdhury   5      284        48.04