Title
A Batch Normalization Free Binarized Convolutional Deep Neural Network on an FPGA (Abstract Only).
Abstract
A pre-trained convolutional deep neural network (CNN), a feed-forward computation widely used in embedded systems, requires high power and area efficiency. This paper realizes a binarized CNN that restricts both the inputs and the weights to the two values +1/−1. Each multiplier can then be replaced by an XNOR gate rather than a dedicated DSP block, so binarized inputs and weights are well suited to hardware implementation. However, a binarized CNN requires batch normalization to retain its classification accuracy; the extra multiplications and additions cost additional hardware, and the memory accesses for the batch-normalization parameters reduce system performance. In this paper, we propose a batch-normalization-free CNN that is mathematically equivalent to a CNN using batch normalization: it operates on binarized inputs and weights together with an integer bias. We implemented the VGG-16 benchmark CNN on the NetFPGA-SUME board, which carries a Xilinx Virtex-7 FPGA and three off-chip QDR II+ synchronous SRAMs. Compared with conventional FPGA realizations, although the classification error rate is 6.5% worse, the performance is 2.82 times faster, the power consumption is 1.76 times lower, and the area is 11.03 times smaller. Thus, our method is suitable for embedded computer systems.
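The two ideas in the abstract can be sketched in a few lines: for ±1 values packed as bits (1 → +1, 0 → −1), a multiply-accumulate becomes XNOR plus popcount, and (for a positive batch-norm scale) the test BN(y) ≥ 0 reduces to y + b ≥ 0 for a single precomputed integer bias b. This is a minimal illustrative sketch, not the authors' RTL; the function names and the rounding choice for the bias are assumptions.

```python
import math
import numpy as np

def xnor_popcount_dot(x_bits, w_bits):
    # XNOR replaces multiplication for +1/-1 values encoded as bits:
    # equal bits contribute +1 to the dot product, unequal bits -1.
    matches = int(np.count_nonzero(x_bits == w_bits))
    n = x_bits.size
    return 2 * matches - n  # signed integer dot product

def fold_bn_into_bias(mu, sigma, gamma, beta):
    # For gamma > 0: gamma*(y - mu)/sigma + beta >= 0  <=>  y >= mu - beta*sigma/gamma,
    # i.e. y + b >= 0 with b = ceil(beta*sigma/gamma - mu).
    # Ceiling is a hypothetical rounding choice; y is always an integer here.
    assert gamma > 0
    return math.ceil(beta * sigma / gamma - mu)

def bn_free_binary_neuron(x_bits, w_bits, integer_bias):
    # Batch normalization folded into one integer bias: the neuron outputs
    # +1 when the XNOR-popcount sum plus the bias is non-negative, else -1.
    return 1 if xnor_popcount_dot(x_bits, w_bits) + integer_bias >= 0 else -1
```

No floating-point multiply or per-layer batch-norm parameter fetch remains at inference time, which is the hardware saving the paper targets.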
Year
2017
DOI
10.1145/3020078.3021782
Venue
FPGA
Keywords
Binarized Deep Neural Network, FPGA
Field
Digital signal processing, Normalization (statistics), XNOR gate, Computer science, Parallel computing, Word error rate, Field-programmable gate array, Multiplier (economics), Real-time computing, Multiplication, Artificial neural network
DocType
Conference
Citations
4
PageRank
0.43
References
0
Authors
4
Name | Order | Citations | PageRank
Hiroki Nakahara | 1 | 155 | 37.34
Haruyoshi Yonekawa | 2 | 34 | 4.37
Hisashi Iwamoto | 3 | 11 | 2.28
Masato Motomura | 4 | 8 | 3.65