Abstract |
---|
In this paper, we tackle the problem of incrementally learning a classifier, one example at a time, directly on chip. To this end, we propose an efficient hardware implementation of a recently introduced incremental learning procedure that achieves state-of-the-art performance by combining transfer learning with majority votes and quantization techniques. The proposed design accommodates both new examples and new classes directly on the chip. We detail the hardware implementation of the method (on an FPGA target) and show that it requires limited resources while providing a significant acceleration compared to a CPU. |
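The abstract combines three ingredients: transfer-learned feature vectors, quantization, and a majority vote. As a rough illustration of how such a classifier can learn one example at a time, the sketch below splits each feature vector into subvectors, stores each training example's subvectors as anchors, and classifies a query by a majority vote over subvector-wise nearest-anchor lookups. This is a hedged approximation, not the authors' exact design; the class name `SubvectorVoter` and methods `fit_one`/`predict` are illustrative.

```python
import numpy as np

class SubvectorVoter:
    """Toy incremental classifier: majority vote over per-subvector
    nearest-anchor lookups (illustrative, not the paper's exact method)."""

    def __init__(self, num_parts):
        self.num_parts = num_parts   # number of subvectors per feature vector
        self.anchors = []            # list of (list of subvectors, label)

    def _split(self, x):
        # Split a feature vector into `num_parts` contiguous subvectors.
        return np.array_split(np.asarray(x, dtype=float), self.num_parts)

    def fit_one(self, x, label):
        # Incremental update: just store the new example's subvectors.
        # New classes need no special handling -- any label value works.
        self.anchors.append((self._split(x), label))

    def predict(self, x):
        parts = self._split(x)
        votes = {}
        for p, sub in enumerate(parts):
            # The nearest stored anchor for this subvector casts one vote.
            best = min(self.anchors,
                       key=lambda a: float(np.sum((a[0][p] - sub) ** 2)))
            votes[best[1]] = votes.get(best[1], 0) + 1
        # The class with the most subvector votes wins.
        return max(votes, key=votes.get)

clf = SubvectorVoter(num_parts=2)
clf.fit_one([0.0, 0.0, 0.0, 0.0], "a")
clf.fit_one([1.0, 1.0, 1.0, 1.0], "b")
print(clf.predict([0.1, 0.0, 0.0, 0.2]))  # prints "a"
```

Each subvector lookup is an independent distance computation, which is what makes this family of methods attractive for parallel FPGA implementations.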
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/NEWCAS44328.2019.8961310 | 2019 17th IEEE International New Circuits and Systems Conference (NEWCAS) |
Keywords | Field | DocType
---|---|---|
deep learning, artificial neural networks, incremental, FPGA, hardware | Inference, Computer science, Transfer of learning, Field-programmable gate array, Chip, Artificial intelligence, Deep learning, Classifier (machine learning), Artificial neural network, Quantization (signal processing), Computer hardware | Conference
ISSN | ISBN | Citations
---|---|---|
2472-467X | 978-1-7281-1032-5 | 1
PageRank | References | Authors
---|---|---|
0.36 | 0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Ghouthi B. Hacene | 1 | 1 | 0.36 |
Vincent Gripon | 2 | 210 | 27.16 |
Nicolas Farrugia | 3 | 21 | 4.16 |
Matthieu Arzel | 4 | 69 | 15.10 |
Michel Jézéquel | 5 | 769 | 84.23 |