Abstract |
---|
Automatic speech recognition based on recurrent neural networks (RNNs) has become increasingly important on mobile devices such as smartphones. However, previous RNN compression techniques either suffer from hardware performance overhead due to irregularity, or from significant accuracy loss due to the regularity preserved for hardware friendliness. In this work, we propose RTMobile, which combines a novel block-based pruning approach with compiler optimizations to accelerate RNN inference on mobile devices. RTMobile is the first work to achieve real-time RNN inference on mobile platforms. Experimental results demonstrate that RTMobile significantly outperforms existing RNN hardware acceleration methods in both inference accuracy and inference time. Compared with prior work on FPGA, RTMobile running a GRU on the Adreno 640 embedded GPU improves energy efficiency by 40× while maintaining the same inference time. |
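The abstract's block-based pruning can be illustrated with a minimal magnitude-based sketch: partition a weight matrix into fixed-size blocks, rank the blocks by Frobenius norm, and zero out the lowest-ranked ones. This is an assumption-laden simplification (the function name `block_prune`, the block shape, and the norm-based ranking are illustrative choices, not the paper's exact algorithm), but it shows why block-level regularity is hardware-friendly: the surviving nonzeros form dense tiles that map cleanly onto mobile GPU compute units.

```python
import numpy as np

def block_prune(W, block_shape=(4, 4), sparsity=0.5):
    """Hypothetical sketch of block-based magnitude pruning.

    Zeroes out the `sparsity` fraction of blocks with the smallest
    Frobenius norm; the remaining blocks stay fully dense.
    """
    rows, cols = W.shape
    br, bc = block_shape
    assert rows % br == 0 and cols % bc == 0, "blocks must tile the matrix"

    # View W as a grid of (br x bc) blocks and score each block.
    blocks = W.reshape(rows // br, br, cols // bc, bc)
    norms = np.linalg.norm(blocks, axis=(1, 3))  # Frobenius norm per block

    # Keep the highest-norm blocks; prune the k smallest.
    k = int(norms.size * sparsity)
    order = np.argsort(norms.ravel())
    keep = np.ones(norms.size, dtype=bool)
    keep[order[:k]] = False

    # Expand the block-level mask back to element level and apply it.
    mask = np.repeat(np.repeat(keep.reshape(norms.shape), br, axis=0),
                     bc, axis=1)
    return W * mask
```

With `sparsity=0.5` and 4×4 blocks on an 8×8 matrix, exactly two of the four blocks are zeroed, so downstream kernels can skip whole tiles rather than scattered individual weights.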
Year | DOI | Venue |
---|---|---|
2020 | 10.1109/DAC18072.2020.9218499 | 2020 57th ACM/IEEE Design Automation Conference (DAC) |
Keywords | DocType | ISSN |
---|---|---|
RNN, pruning, real-time acceleration, mobile | Conference | 0738-100X |
ISBN | Citations | PageRank |
---|---|---|
978-1-7281-1085-1 | 2 | 0.36 |
References | Authors |
---|---|
0 | 10 |
Name | Order | Citations | PageRank |
---|---|---|---|
Dong Peiyan | 1 | 4 | 3.12 |
Siyue Wang | 2 | 21 | 3.78 |
Wei Niu | 3 | 24 | 11.21 |
Chengming Zhang | 4 | 5 | 3.10 |
Sheng Lin | 5 | 139 | 14.39 |
Zhengang Li | 6 | 15 | 7.27 |
Yifan Gong | 7 | 1332 | 135.58 |
Bin Ren | 8 | 82 | 18.03 |
Xue Lin | 9 | 86 | 14.97 |
Dingwen Tao | 10 | 129 | 17.66 |