Abstract |
---|
Deep convolutional neural networks (CNNs) have recently shown very high accuracy in a wide range of cognitive tasks and have consequently received significant interest from researchers. Given the high computational demands of CNNs, custom hardware accelerators are vital for boosting their performance. The high energy efficiency, computing capability and reconfigurability of FPGAs make them a promising platform for hardware acceleration of CNNs. In this paper, we present a survey of techniques for implementing and optimizing CNN algorithms on FPGAs. We organize the works into several categories to bring out their similarities and differences. This paper is expected to be useful for researchers in the areas of artificial intelligence, hardware architecture and system design. |
Year | DOI | Venue |
---|---|---|
2020 | 10.1007/s00521-018-3761-1 | Neural Computing and Applications |
Keywords | Field | DocType
---|---|---
Deep learning, Neural network (NN), Convolutional NN (CNN), Binarized NN, Hardware architecture for machine learning, FPGA, Reconfigurable computing, Parallelization, Low power | Computer architecture, Reconfigurability, Convolutional neural network, Field-programmable gate array, Hardware acceleration, Artificial intelligence, Deep learning, Artificial neural network, Mathematics, Machine learning, Hardware architecture, Reconfigurable computing | Journal
Volume | Issue | ISSN
---|---|---
32 | 4 | 1433-3058
Citations | PageRank | References
---|---|---
18 | 0.80 | 25
Authors |
---|
1 |
Name | Order | Citations | PageRank |
---|---|---|---|
Sparsh Mittal | 1 | 817 | 50.36 |