Abstract
---
Weight pruning methods for deep neural networks (DNNs) have been demonstrated to achieve good pruning rates without loss of accuracy, thereby alleviating the significant computation/storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and have demonstrated actual GPU acceleration. However, in prior...
Year | DOI | Venue
---|---|---
2022 | 10.1109/TNNLS.2020.3045153 | IEEE Transactions on Neural Networks and Learning Systems

Keywords | DocType | Volume
---|---|---
Graphics processing units, Acceleration, Convex functions, Optimization, Quantization (signal), Degradation, Periodic structures | Journal | 33

Issue | ISSN | Citations
---|---|---
5 | 2162-237X | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 12
Name | Order | Citations | PageRank |
---|---|---|---|
Tianyun Zhang | 1 | 31 | 6.42 |
Shaokai Ye | 2 | 38 | 6.53 |
Xiaoyu Feng | 3 | 12 | 4.68
Xiaolong Ma | 4 | 22 | 5.90 |
Kaiqi Zhang | 5 | 19 | 7.11 |
Zhengang Li | 6 | 15 | 7.27 |
Jian Tang | 7 | 1095 | 74.34 |
Sijia Liu | 8 | 181 | 42.37 |
Xue Lin | 9 | 2 | 2.74 |
Yongpan Liu | 10 | 1056 | 84.55 |
Makan Fardad | 11 | 547 | 41.98 |
Yanzhi Wang | 12 | 1082 | 136.11 |