Abstract
---
Deep network pruning is an effective method to reduce the storage and computation costs of deep neural networks when deploying them on resource-limited devices. Among the many pruning granularities, neuron-level pruning removes redundant neurons and filters from the model, resulting in thinner networks. In this paper, we propose a gradually global pruning scheme for neuron-level pruning. In each pruning step, a small percentage of neurons is selected and dropped across all layers of the model. We also propose a simple method to eliminate the biases in evaluating neuron importance, which makes the scheme feasible. Compared with layer-wise pruning schemes, our scheme avoids the difficulty of determining the redundancy of each layer and is more effective for deep networks. Our scheme automatically finds a thinner sub-network within the original network under a given performance constraint.
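To make the idea concrete, below is a minimal PyTorch sketch of one gradual global pruning step. The importance score used here (per-layer-normalized L2 norm of each filter's weights) and the zeroing-based removal are simplifications for illustration, not the paper's exact importance measure or bias-elimination method; `global_prune_step` and `drop_frac` are hypothetical names.

```python
import torch
import torch.nn as nn

def global_prune_step(model: nn.Module, drop_frac: float = 0.02) -> None:
    """Zero out the globally least-important fraction of neurons/filters."""
    scores = []   # per-neuron importance scores
    handles = []  # (module, output-channel) pair for each score
    for mod in model.modules():
        if isinstance(mod, (nn.Conv2d, nn.Linear)):
            w = mod.weight.detach()
            s = w.flatten(1).norm(dim=1)   # one L2 score per output neuron
            s = s / (s.sum() + 1e-12)      # per-layer normalization, so scores
                                           # are comparable across layers
            scores.append(s)
            handles.extend((mod, i) for i in range(w.shape[0]))
    all_scores = torch.cat(scores)
    k = max(1, int(drop_frac * all_scores.numel()))
    drop = torch.topk(all_scores, k, largest=False).indices
    with torch.no_grad():
        for j in drop.tolist():
            mod, ch = handles[j]
            mod.weight[ch].zero_()          # already-zeroed neurons score ~0,
            if mod.bias is not None:        # so they stay pruned in later steps
                mod.bias[ch].zero_()

# Gradual schedule: prune a small fraction per step, fine-tuning in between.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))
for step in range(5):
    global_prune_step(model, drop_frac=0.05)
    # ... fine-tune here to recover accuracy before the next step ...
```

Selecting the drop set from a single globally pooled score list is what distinguishes this scheme from layer-wise pruning: no per-layer redundancy ratio has to be chosen by hand, provided the scores have been debiased so that layers with different weight scales compete fairly.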
Year | Venue | Keywords |
---|---|---|
2017 | 2017 24th IEEE International Conference on Image Processing (ICIP) | Artificial neural networks, Deep learning, Deep compression
DocType | Volume | ISSN
---|---|---
Conference | abs/1703.09916 | 1522-4880
Citations | PageRank | References
---|---|---
0 | 0.34 | 15
Authors
---
5
Name | Order | Citations | PageRank |
---|---|---|---|
Zhengtao Wang | 1 | 78 | 3.40 |
Ce Zhu | 2 | 1473 | 117.79 |
Zhiqiang Xia | 3 | 0 | 0.34 |
Qi Guo | 4 | 43 | 12.11 |
Yipeng Liu | 5 | 43 | 5.93 |