Title
PermCNN: Energy-Efficient Convolutional Neural Network Hardware Architecture With Permuted Diagonal Structure
Abstract
In the emerging artificial intelligence (AI) era, efficient hardware accelerator design for deep neural networks (DNNs) is essential to enable real-time, energy-efficient deployment of DNN models. To this end, various DNN model compression approaches and their corresponding hardware architectures have been intensively investigated. Recently, PermDNN, a model compression approach that imposes a permuted diagonal structure, was proposed with promising classification and hardware performance. However, the existing PermDNN hardware architecture is designed specifically for DNN models containing fully-connected (FC) layers; it does not support convolutional (CONV) layers. To fill this gap, this article proposes PermCNN, an energy-efficient hardware architecture for permuted diagonal structured convolutional neural networks (CNNs). By fully utilizing the strong structured sparsity in the trained models and carefully leveraging the dynamic activation sparsity, PermCNN delivers very high hardware performance for inference on CNN models. A design example in 28 nm CMOS technology shows that, compared to the state-of-the-art CNN accelerator, PermCNN achieves 3.74× and 3.11× improvements in area and energy efficiency, respectively, on the AlexNet workload, and 17.49× and 14.22× improvements in area and energy efficiency, respectively, on the VGG model. After including the energy consumed by DRAM accesses, PermCNN achieves 2.60× and 9.62× overall energy consumption improvements on the AlexNet and VGG workloads, respectively.
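The permuted diagonal structure mentioned above can be made concrete with a short sketch. The following minimal Python example is an illustration based on the abstract's description, not code from the paper; the function name and values are hypothetical. It builds one p × p weight block whose nonzeros lie on a single circularly shifted diagonal, so the block stores only p weights instead of p².

    # Illustrative sketch (not from the paper): a permuted diagonal block,
    # the building unit of PermDNN/PermCNN-style structured weight matrices.
    import numpy as np

    def permuted_diagonal_block(values, shift):
        """Return a p x p block whose only nonzeros sit on the diagonal
        circularly shifted right by `shift` columns (p = len(values))."""
        p = len(values)
        block = np.zeros((p, p))
        for i in range(p):
            block[i, (i + shift) % p] = values[i]
        return block

    # A 4 x 4 block with shift 1: 4 stored weights instead of 16,
    # i.e., a compression ratio of p = 4.
    print(permuted_diagonal_block([0.5, -1.2, 0.7, 2.0], shift=1))

Since each block's nonzero positions are fixed by a single shift value, the sparsity pattern is known at design time, which is the kind of structured sparsity an accelerator can exploit.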
Year: 2021
DOI: 10.1109/TC.2020.2981068
Venue: Periodicals
Keywords: Deep learning, model compression, hardware accelerator, convolutional neural network
DocType: Journal
Volume: 70
Issue: 2
ISSN: 0018-9340
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name            Order    Citations    PageRank
Chunhua Deng    1        18           7.45
Siyu Liao       2        41           8.73
Bo Yuan         3        262          28.64