Title |
---|
Reducing Dynamic Power in Streaming CNN Hardware Accelerators by Exploiting Computational Redundancies |
Abstract |
---|
Convolutional neural networks (CNNs) have achieved tremendous success in various application domains such as computer vision. However, current implementations are characterized by large memory requirements and frequent memory accesses, which impede their deployment on low-cost embedded devices with fast runtime requirements. Recently, FPGA-based streaming CNN hardware accelerators have been reported that alleviate these memory bottlenecks. However, these implementations perform a large number of convolution operations, which incurs high power consumption. In this paper, we investigate methods to exploit the redundancies in the activation layers in order to reduce dynamic power. We propose a computationally efficient approximation method that reduces the overall number of convolution operations with marginal accuracy loss. Experimental results of our FPGA implementation on image classification datasets show that the proposed method leads to considerable power savings. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/FPL.2019.00063 | 2019 29th International Conference on Field Programmable Logic and Applications (FPL) |
Keywords | Field | DocType
---|---|---
Convolutional neural networks, FPGA, Streaming hardware accelerators, dynamic power reduction, approximate computing | Software deployment, Computer science, Convolution, Convolutional neural network, Field-programmable gate array, Exploit, Implementation, Dynamic demand, Computer hardware, Contextual image classification | Conference
ISSN | ISBN | Citations
---|---|---
1946-147X | 978-1-7281-4885-4 | 0
PageRank | References | Authors
---|---|---
0.34 | 0 | 5
Name | Order | Citations | PageRank |
---|---|---|---
Duvindu Piyasena | 1 | 1 | 2.04 |
Rukshan Wickramasinghe | 2 | 0 | 0.34 |
Debdeep Paul | 3 | 0 | 1.35 |
Siew Kei Lam | 4 | 102 | 36.26 |
Meiqing Wu | 5 | 27 | 9.61 |