Title
A high-performance FPGA accelerator for sparse neural networks: work-in-progress
Abstract
Neural networks have been widely used in a broad range of domains, and researchers tune the numbers of layers, neurons, and synapses to adapt them to various applications. As a consequence, neural network models are both computation- and memory-intensive. Because of these large memory and computing requirements, it is difficult to deploy neural networks on resource-limited platforms. Sparse neural networks, which prune redundant neurons and synapses, alleviate the computation and memory pressure. However, conventional accelerators cannot benefit from this sparsity. In this paper, we propose a high-performance FPGA accelerator for sparse neural networks that eliminates redundant computations and storage: it compresses the sparse weights and processes the compressed data directly. Experimental results demonstrate that our accelerator reduces the storage of convolutional and fully-connected layers by 50% and 10%, respectively, and achieves a 3x performance speedup over an optimized conventional FPGA accelerator.
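The abstract does not spell out the compression scheme, so the following is only a rough, hypothetical C sketch of the general idea it describes: storing only the non-zero (unpruned) weights and computing on that compressed form directly, instead of decompressing back to a dense matrix. A CSR-style encoding for a fully-connected layer is assumed here; the names sparse_weights and sparse_fc are illustrative and not taken from the paper.

#include <stddef.h>

/* Hypothetical CSR-style encoding of a pruned (sparse) weight matrix:
 * only non-zero weights are kept, together with their column indices
 * and per-row offsets, so pruned synapses cost no storage. */
typedef struct {
    size_t rows;          /* number of output neurons                 */
    const float *values;  /* non-zero weights, length nnz             */
    const int   *col_idx; /* input-neuron index of each stored weight */
    const int   *row_ptr; /* offsets into values, length rows + 1     */
} sparse_weights;

/* Fully-connected layer evaluated on the compressed weights directly:
 * each output accumulates only over the stored (non-zero) synapses,
 * so pruned connections also cost no computation. */
void sparse_fc(const sparse_weights *w, const float *in, float *out)
{
    for (size_t r = 0; r < w->rows; ++r) {
        float acc = 0.0f;
        for (int k = w->row_ptr[r]; k < w->row_ptr[r + 1]; ++k)
            acc += w->values[k] * in[w->col_idx[k]];
        out[r] = acc;
    }
}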
Year
2017
DOI
10.1145/3125501.3125510
Venue
CASES
Keywords
high-performance FPGA accelerator, sparse neural networks, synapses, sparse weights, compressed data processing, full-connected layers, storage space, optimized conventional FPGA accelerator, memory pressure, redundant neurons
Field
Computer science, Work in process, Parallel computing, Field-programmable gate array, Artificial intelligence, Deep learning, Artificial neural network, Computation, Speedup
DocType
Conference
ISBN
978-1-4503-5184-3
Citations
0
PageRank
0.34
References
4
Authors
7
Name           Order  Citations  PageRank
Yuntao Lu      1      0          0.68
Lei Gong       2      1          2.47
Chongchong Xu  3      7          4.63
Fan Sun        4      6          3.27
Yiwei Zhang    5      52         12.65
Chao Wang      6      372        62.24
Xuehai Zhou    7      551        77.54