Title
All binarized convolutional neural network and its implementation on an FPGA
Abstract
From a computational standpoint, a pre-trained convolutional neural network (CNN) is a feed-forward computation, and it is widely used in embedded systems that require high power and area efficiency. This paper realizes a binarized CNN that uses only binarized values (+1/-1) for the weights and the activations. In this case, each multiplier is replaced by an XNOR circuit instead of a dedicated DSP block, so binarizing both weights and activations is well suited to hardware implementation. However, the first convolutional layer conventionally still computes in integer precision, since its input is an 8-bit RGB pixel that is not binarized. In this paper, we decompose the input value into maps in which each pixel is in 1-bit precision. The proposed method enables a binarized CNN to use bitwise operations in all layers and to share a single binarized convolutional circuit among all convolutional layers. We call this an all binarized CNN. We compared our proposal with conventional ones. Since the all binarized CNN does not require a dedicated DSP block, our proposal is smaller and 1.2 times faster than typical CNNs, while almost maintaining the baseline classification accuracy. In addition, a pipelined all binarized CNN achieved 1840 FPS, consumed 0.3 watts, and reached 82.8% accuracy.
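The abstract names two ideas that a short sketch can illustrate: decomposing the 8-bit RGB input into 1-bit maps (bit planes) so the first layer can also operate on binary values, and replacing multiply-accumulate with XNOR plus popcount. The NumPy sketch below is only an illustration of those two ideas under stated assumptions; the function names (decompose_to_bitplanes, xnor_popcount_dot) are hypothetical and the paper's actual FPGA circuit is not reproduced here.

```python
import numpy as np


def decompose_to_bitplanes(rgb_image: np.ndarray) -> np.ndarray:
    """Expand an 8-bit RGB image of shape (H, W, 3) into 24 binary maps (H, W, 24).

    Each channel contributes 8 maps, one per bit position, so every value
    fed to the first layer is already in 1-bit precision.
    """
    channels = rgb_image.shape[-1]
    planes = [(rgb_image[:, :, ch] >> bit) & 1
              for ch in range(channels)
              for bit in range(8)]
    return np.stack(planes, axis=-1).astype(np.uint8)


def xnor_popcount_dot(x_bits: np.ndarray, w_bits: np.ndarray) -> int:
    """Dot product of two {+1, -1} vectors encoded as bits {1, 0}.

    XNOR marks the positions where the signs agree; with N elements the
    result equals 2 * popcount(XNOR) - N, so no multiplier is needed.
    """
    n = x_bits.size
    agree = np.count_nonzero((x_bits ^ w_bits) == 0)
    return 2 * int(agree) - n


# Hypothetical usage: one 3x3 receptive field of the bit-plane input
# against a random binary kernel, accumulated without any multiplication.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
bitplanes = decompose_to_bitplanes(img)      # (32, 32, 24), values in {0, 1}
x = bitplanes[:3, :3, :].ravel()
w = rng.integers(0, 2, size=x.size, dtype=np.uint8)
print(xnor_popcount_dot(x, w))
```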
Year
2017
DOI
10.1109/FPT.2017.8280163
Venue
2017 International Conference on Field Programmable Technology (ICFPT)
Keywords
Deep Neural Network, Convolutional Neural Network, Binarized Convolutional Neural Network, FPGA
Field
Digital signal processing, XNOR gate, Pattern recognition, Bitwise operation, Convolutional neural network, Computer science, Parallel computing, Field-programmable gate array, Multiplier, RGB color model, Artificial intelligence, Pixel
DocType
Conference
ISBN
978-1-5386-2657-3
Citations
3
PageRank
0.72
References
7
Authors
3
Name, Order, Citations, PageRank
Masayuki Shimoda, 1, 8, 6.45
Shimpei Sato, 2, 43, 13.03
Hiroki Nakahara, 3, 155, 37.34