Title
Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA
Abstract
Binarized Neural Networks (BNNs) remove bitwidth redundancy in classical CNNs by using a single bit (-1/+1) for network parameters and intermediate representations, which greatly reduces off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of ~78% input similarity and ~59% weight similarity among weight kernels, measured by our proposed metric on common network architectures. Thus redundancy exists that can be exploited to further reduce the amount of on-chip computation. Motivated by this observation, in this paper we propose two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights for choosing the better of these two strategies for different datasets and network models. By reusing results from previous computations, many cycles of data-buffer access and computation can be skipped. Through experiments, we demonstrate that 80% of the computation and 40% of the buffer accesses can be skipped by exploiting BNN similarity. As a result, our design achieves a 17% reduction in total power consumption, a 54% reduction in on-chip power consumption, and a 2.4× maximum speedup compared to the baseline without our reuse technique. Our design also shows 1.9× higher area efficiency than a state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.
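The reuse idea the abstract describes rests on a simple property of binarized dot products: for {-1,+1} vectors, the result for one kernel (or one input window) can be obtained from a previously computed result by correcting only the positions where the two vectors differ. The sketch below is a minimal software illustration of that incremental update, not the paper's FPGA architecture; the function names (popcount_dot, reuse_dot) and the random test data are hypothetical.

```python
import numpy as np

def popcount_dot(x_bits, w_bits):
    # Full XNOR-popcount dot product for {-1,+1} vectors stored as 0/1 bits:
    # dot = (#matching bits) - (#mismatching bits) = 2 * matches - n.
    n = x_bits.size
    matches = np.count_nonzero(x_bits == w_bits)
    return 2 * matches - n

def reuse_dot(prev_dot, x_bits, w_prev_bits, w_next_bits):
    # Incremental update: visit only the positions where the two kernels differ.
    # Flipping w_i negates the term x_i * w_i, so each differing position
    # changes the dot product by -2 * x_i * w_prev_i.
    diff = np.nonzero(w_prev_bits != w_next_bits)[0]
    x_pm = 2 * x_bits[diff].astype(int) - 1       # map 0/1 back to -1/+1
    w_pm = 2 * w_prev_bits[diff].astype(int) - 1
    return prev_dot - 2 * int(np.sum(x_pm * w_pm))

# Hypothetical usage: two kernels that agree on ~90% of their bits.
rng = np.random.default_rng(0)
n = 256
x  = rng.integers(0, 2, n)
w0 = rng.integers(0, 2, n)
w1 = w0.copy()
w1[rng.choice(n, size=n // 10, replace=False)] ^= 1

d0 = popcount_dot(x, w0)
assert reuse_dot(d0, x, w0, w1) == popcount_dot(x, w1)
```

With the ~59% weight similarity and ~78% input similarity reported in the abstract, such a correction step touches far fewer positions than a full XNOR-popcount, which is where the skipped computations and buffer accesses come from.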
Year
2019
Venue
CoRR
DocType
Conference
Volume
abs/1810.02068
ISBN
978-1-4503-6137-8
Citations
0
PageRank
0.34
References
8
Authors
5
Name          Order  Citations  PageRank
Cheng Fu      1      10         3.10
Shilin Zhu    2      13         3.28
Hao Su        3      0          0.34
Ching-En Lee  4      21         5.86
Jishen Zhao   5      638        38.51