Title
Efficient Hardware Implementation of Incremental Learning and Inference on Chip
Abstract
In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip. To this end, we propose an efficient hardware implementation of a recently introduced incremental learning procedure that achieves state-of-the-art performance by combining transfer learning with majority votes and quantization techniques. The proposed design is able to accommodate both new examples and new classes directly on chip. We detail the hardware implementation of the method on an FPGA target and show that it requires limited resources while providing a significant acceleration compared to a CPU implementation.
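The abstract describes the learning procedure only at a high level. As a purely illustrative companion, the Python sketch below shows one plausible software model of such a scheme: features produced by a pre-trained (transfer-learning) network are quantized, split into sub-vectors, per-class anchors are updated one example at a time, and classification is a majority vote over per-sub-vector nearest-anchor decisions. All names, the fixed-point quantization format, the sub-vector split and the running-mean update are assumptions made for this sketch; they are not taken from the paper or its hardware design.

# Hedged sketch (not the authors' exact design): incremental learning in the
# spirit of the abstract -- quantized sub-vectors, one-example-at-a-time
# updates, majority vote over sub-vector decisions.
import numpy as np

class IncrementalSubvectorClassifier:
    def __init__(self, feature_dim, num_subvectors=8, num_bits=8):
        assert feature_dim % num_subvectors == 0
        self.P = num_subvectors                  # sub-vectors per feature (assumed)
        self.d = feature_dim // num_subvectors   # sub-vector length
        self.scale = (1 << (num_bits - 1)) - 1   # symmetric fixed-point scale (assumed)
        self.anchors = {}                        # class -> (P, d) running-mean anchors
        self.counts = {}                         # class -> number of examples seen

    def _quantize(self, x):
        # Uniform fixed-point quantization (an assumption about the on-chip format).
        return np.clip(np.round(x * self.scale), -self.scale, self.scale)

    def learn_one(self, feature, label):
        # Incremental update: one example at a time; new classes accepted on the fly.
        sub = self._quantize(feature.reshape(self.P, self.d))
        if label not in self.anchors:
            self.anchors[label] = sub.astype(np.float32)
            self.counts[label] = 1
        else:
            n = self.counts[label] + 1
            self.anchors[label] += (sub - self.anchors[label]) / n  # running mean
            self.counts[label] = n

    def predict(self, feature):
        # Each sub-vector votes for the class of its nearest anchor (L2 distance);
        # the final decision is a majority vote over the P sub-vectors.
        sub = self._quantize(feature.reshape(self.P, self.d))
        labels = list(self.anchors)
        votes = np.zeros(len(labels), dtype=int)
        for p in range(self.P):
            dists = [np.sum((self.anchors[c][p] - sub[p]) ** 2) for c in labels]
            votes[int(np.argmin(dists))] += 1
        return labels[int(np.argmax(votes))]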
Year
2019
DOI
10.1109/NEWCAS44328.2019.8961310
Venue
2019 17th IEEE International New Circuits and Systems Conference (NEWCAS)
Keywords
deep learning, artificial neural networks, incremental, FPGA, hardware
Field
Inference, Computer science, Transfer of learning, Field-programmable gate array, Chip, Artificial intelligence, Deep learning, Classifier (linguistics), Artificial neural network, Quantization (signal processing), Computer hardware
DocType
Conference
ISSN
2472-467X
ISBN
978-1-7281-1032-5
Citations
1
PageRank
0.36
References
0
Authors
5
Name                 Order  Citations  PageRank
Ghouthi B. Hacene    1      1          0.36
Vincent Gripon       2      210        27.16
Nicolas Farrugia     3      21         4.16
Matthieu Arzel       4      69         15.10
Michel Jézéquel      5      769        84.23