Abstract
---
To address the large model size and intensive computation requirements of deep neural networks (DNNs), weight pruning techniques have been proposed and generally fall into two categories, i.e., static regularization-based pruning and dynamic regularization-based pruning. However, the former currently suffers from either complex workloads or accuracy degradation, while the latter takes a long ...
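As a rough illustration of the first category mentioned in the abstract (not the method proposed in this paper), static regularization-based pruning can be sketched as training with a fixed sparsity-promoting penalty on the weights and then zeroing out the weights driven close to zero. The model, data loader, and the `l1_coeff` / `threshold` values below are assumed placeholders:

```python
# Minimal sketch of static regularization-based pruning (illustrative only;
# not the unified framework proposed in this paper). Dynamic regularization
# methods instead adjust the penalty coefficients during training.
import torch

def train_with_l1(model, loader, optimizer, loss_fn, l1_coeff=1e-4, epochs=1):
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            # Static regularization: a fixed L1 penalty on all weights.
            l1_penalty = sum(p.abs().sum() for p in model.parameters())
            (loss + l1_coeff * l1_penalty).backward()
            optimizer.step()

def magnitude_prune(model, threshold=1e-3):
    # After regularized training, zero out small-magnitude weights.
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())
```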
Year | DOI | Venue
---|---|---
2021 | 10.1109/DAC18074.2021.9586152 | 2021 58th ACM/IEEE Design Automation Conference (DAC)
Keywords | DocType | ISSN
---|---|---
Degradation, Training, Deep learning, Design automation, Computational modeling, Optimization methods | Conference | 0738-100X

ISBN | Citations | PageRank
---|---|---
978-1-6654-3274-0 | 1 | 0.37

References | Authors
---|---
0 | 7
Name | Order | Citations | PageRank
---|---|---|---
Tianyun Zhang | 1 | 31 | 6.42 |
Xiaolong Ma | 2 | 22 | 5.90 |
Zhan Zheng | 3 | 5 | 4.59 |
Shanglin Zhou | 4 | 1 | 0.37 |
Caiwen Ding | 5 | 142 | 26.52 |
Makan Fardad | 6 | 11 | 1.70 |
Yanzhi Wang | 7 | 1082 | 136.11 |