Title
A compression strategy to accelerate LSTM meta-learning on FPGA
Abstract
Driven by edge computing, efficiently deploying the meta-learner LSTM on resource-constrained FPGA terminal devices has become a significant challenge. This paper proposes a compression strategy for the LSTM meta-learning model that combines structured pruning of the weight matrices with mixed-precision quantization. The weight matrices are first pruned into sparse matrices, and the remaining weights are then quantized to reduce resource consumption. Finally, an LSTM meta-learning accelerator is designed based on the idea of hardware–software co-design. Experiments show that, compared with mainstream hardware platforms, the proposed accelerator achieves at least a 50.14-fold improvement in energy efficiency.
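The abstract gives no implementation details, but the pipeline it describes, structured pruning of the gate weight matrices followed by mixed-precision quantization, can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's method: the row-wise L2 pruning criterion, the 50% keep ratio, and the per-gate bit widths are all hypothetical choices made for the example.

```python
import numpy as np

def structured_prune(W, keep_ratio=0.5):
    """Zero out whole rows of W (one row per hidden unit), keeping the
    rows with the largest L2 norms. Row-level sparsity is an assumed
    structure; it maps cleanly onto banked FPGA on-chip memory."""
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(keep_ratio * W.shape[0]))
    keep = np.argsort(norms)[-k:]          # indices of the rows to keep
    mask = np.zeros(W.shape[0], dtype=bool)
    mask[keep] = True
    return W * mask[:, None], mask

def quantize_uniform(W, bits):
    """Symmetric uniform quantization to a signed fixed-point grid."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.abs(W).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return q.astype(np.int32), scale

rng = np.random.default_rng(0)
# Stand-ins for the four gate weight matrices of one LSTM layer.
gates = {g: rng.normal(size=(128, 64)) for g in ("i", "f", "g", "o")}
# Hypothetical mixed-precision assignment: wider words for gates
# assumed to be more sensitive, narrower words for the rest.
bits = {"i": 8, "f": 8, "g": 4, "o": 4}

for name, W in gates.items():
    Wp, mask = structured_prune(W, keep_ratio=0.5)
    q, scale = quantize_uniform(Wp[mask], bits[name])  # store kept rows only
    print(f"gate {name}: kept {mask.sum()} rows, {bits[name]}-bit, scale={scale:.4f}")
```

Because pruning removes whole rows, only the surviving rows and a compact row index need to be stored, which is what makes the sparse matrix cheap to stream through a fixed-point datapath on the FPGA.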
Year
2022
DOI
10.1016/j.icte.2022.03.014
Venue
ICT Express
Keywords
Edge computing, FPGA, LSTM Meta-Learning Accelerator, Structured pruning, Mixed-precision quantization
DocType
Journal
Volume
8
Issue
3
ISSN
2405-9595
Citations
0
PageRank
0.34
References
1
Authors
5
Name            Order  Citations  PageRank
NianYi Wang     1      0          0.34
Jing Nie        2      0          0.34
Jingbin Li      3      0          1.01
Kang Wang       4      0          0.34
ShunKang Ling   5      0          0.34