Title
Trained Rank Pruning for Efficient Deep Neural Networks
Abstract
To accelerate DNNs inference, low-rank approximation has been widely adopted because of its solid theoretical rationale and efficient implementations. Several previous works attempted to directly approximate a pre-trained model by low-rank decomposition; however, small approximation errors in parameters can ripple over a large prediction loss. Apparently, it is not optimal to separate low-rank app...
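As context for the abstract, below is a minimal sketch of the kind of post-training low-rank factorization it refers to, not the paper's TRP training scheme: a dense layer's weight matrix is approximated by a truncated SVD so that one large matrix multiply becomes two thin ones. The layer sizes, the rank r=64, and the helper name low_rank_factorize are illustrative assumptions.

# Minimal sketch (assumed example, not the paper's TRP algorithm):
# approximate a pre-trained dense layer's weight matrix with a rank-r factorization.
import numpy as np

def low_rank_factorize(W, r):
    """Approximate W (m x n) as A @ B with A (m x r) and B (r x n) via truncated SVD."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # absorb singular values into the left factor
    B = Vt[:r, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 1024))   # stand-in for a pre-trained weight matrix
A, B = low_rank_factorize(W, r=64)

# One layer y = W x becomes two thin layers y = A (B x):
# 512*1024 multiply-adds shrink to 64*1024 + 512*64, roughly a 5x reduction.
x = rng.standard_normal(1024)
err = np.linalg.norm(W @ x - A @ (B @ x)) / np.linalg.norm(W @ x)
print(f"relative output error at rank 64: {err:.3f}")
# Random weights are nearly full-rank, so this error is large; trained layers
# typically have faster-decaying spectra, which is what makes low-rank
# approximation (and training toward low rank, as in the paper) attractive.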
Year
2018
DOI
10.1109/EMC2-NIPS53020.2019.00011
Venue
2019 Fifth Workshop on Energy Efficient Machine Learning and Cognitive Computing - NeurIPS Edition (EMC2-NIPS)
Keywords
low-rank, decomposition, acceleration, pruning
DocType
Journal
Volume
abs/1812.02402
Issue
1
ISBN
978-1-6654-2418-9
Citations
2
PageRank
0.36
References
25
Authors
9
Name
Order
Citations
PageRank
Yuhui Xu1125.00
Yuxi Li28115.02
Shuai Zhang3456.63
Wei Wen435318.09
Botao Wang517177.07
Y Qi613019.75
Yiran Chen73344259.09
Weiyao Lin873268.05
Hongkai Xiong951282.84