Title
CascadeCNN: Pushing the Performance Limits of Quantisation in Convolutional Neural Networks
Abstract
This work presents CascadeCNN, an automated toolflow that pushes the quantisation limits of any given CNN model, aiming to perform high-throughput inference. A two-stage architecture tailored for any given CNN-FPGA pair is generated, consisting of a low- and a high-precision unit in a cascade. A confidence evaluation unit is employed to identify misclassified cases produced by the excessively low-precision unit and forward them to the high-precision unit for re-processing. Experiments demonstrate that the proposed toolflow can achieve a performance boost of up to 55% for VGG-16 and 48% for AlexNet over the baseline design for the same resource budget and accuracy, without the need for retraining the model or accessing the training data.
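The abstract describes a two-stage cascade in which a confidence evaluation unit decides which inputs classified by the low-precision unit must be re-processed by the high-precision unit. The following is a minimal NumPy sketch of that control flow only; the confidence metric (top-1 vs top-2 softmax margin), the threshold value, the model call signatures, and the mock models are illustrative assumptions and not the paper's actual FPGA toolflow.

```python
import numpy as np

def cascade_inference(x_batch, low_precision_model, high_precision_model,
                      margin_threshold=0.5):
    """Run the low-precision unit on all inputs, then re-process only the
    low-confidence samples with the high-precision unit.

    Assumptions (not from the paper): both models return per-class softmax
    scores of shape (N, num_classes), and confidence is the margin between
    the top-1 and top-2 scores.
    """
    probs_low = low_precision_model(x_batch)            # (N, num_classes)
    top2 = np.sort(probs_low, axis=1)[:, -2:]           # two largest scores per sample
    confidence = top2[:, 1] - top2[:, 0]                # top-1 minus top-2 margin
    preds = probs_low.argmax(axis=1)

    # Samples the confidence evaluation unit flags as likely misclassified.
    uncertain = confidence < margin_threshold
    if uncertain.any():
        probs_high = high_precision_model(x_batch[uncertain])
        preds[uncertain] = probs_high.argmax(axis=1)
    return preds, uncertain.mean()                      # predictions and re-processing rate


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def mock_model(noise):
        # Hypothetical stand-in for a quantised CNN: noisy softmax scores.
        def run(x):
            logits = (x @ rng.normal(size=(x.shape[1], 10))
                      + noise * rng.normal(size=(x.shape[0], 10)))
            e = np.exp(logits - logits.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)
        return run

    x = rng.normal(size=(32, 64))
    preds, reprocessed = cascade_inference(x, mock_model(2.0), mock_model(0.1))
    print(f"re-processed fraction: {reprocessed:.2f}")
```

The re-processing rate returned here illustrates the trade-off the toolflow balances: the more aggressive the low-precision quantisation, the more samples fall below the confidence threshold and incur the cost of the high-precision pass.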
Year
2018
DOI
10.1109/FPL.2018.00034
Venue
2018 28th International Conference on Field Programmable Logic and Applications (FPL)
Keywords
CNN, Convolutional Neural Networks, Quantisation, Low precision, CascadeCNN, ImageNet, VGG-16, AlexNet, FPGA-based Accelerator
DocType
Conference
Volume
abs/1807.05053
ISSN
1946-147X
ISBN
978-1-5386-8518-1
Citations
0
PageRank
0.34
References
17
Authors
3
Name	Order	Citations	PageRank
Alexandros Kouris	1	22	2.83
Stylianos I. Venieris	2	106	12.98
Christos-Savvas Bouganis	3	37	7.60