Title
Rethinking Weight Decay for Efficient Neural Network Pruning
Abstract
Introduced in the late 1980s for generalization purposes, pruning has now become a staple for compressing deep neural networks. Despite many innovations in recent decades, pruning approaches still face core issues that hinder their performance or scalability. Drawing inspiration from early work in the field, and especially the use of weight decay to achieve sparsity, we introduce Selective Weight Decay (SWD), which carries out efficient, continuous pruning throughout training. Our approach, theoretically grounded on Lagrangian smoothing, is versatile and can be applied to multiple tasks, networks, and pruning structures. We show that SWD compares favorably to state-of-the-art approaches, in terms of performance-to-parameters ratio, on the CIFAR-10, Cora, and ImageNet ILSVRC2012 datasets.
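The abstract describes the method only at a high level. As a rough illustration, the following is a minimal, hypothetical PyTorch-style sketch of what a selective weight decay penalty could look like: an extra L2 penalty applied only to the weights currently targeted by a magnitude-based pruning criterion, so they are pushed toward zero continuously during training. The function name, the global magnitude criterion, and the penalty coefficient mu are assumptions for illustration, not the authors' implementation.

import torch

def selective_weight_decay_penalty(model: torch.nn.Module, sparsity: float) -> torch.Tensor:
    # Collect all prunable weights (here: any parameter with more than one dimension).
    weights = torch.cat([p.reshape(-1) for p in model.parameters() if p.dim() > 1])
    k = int(sparsity * weights.numel())
    if k == 0:
        return weights.new_zeros(())
    # Magnitude threshold below which weights are considered targeted for pruning.
    threshold = weights.abs().kthvalue(k).values
    selected = weights[weights.abs() <= threshold]
    # L2 penalty applied only to the selected weights; gradients flow back to the model.
    return (selected ** 2).sum()

# Hypothetical usage inside a training step; mu would typically be scheduled to grow
# during training so the targeted weights become negligible before their removal:
# loss = criterion(model(x), y) + mu * selective_weight_decay_penalty(model, sparsity=0.9)
# loss.backward()
# optimizer.step()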
Year
2022
DOI
10.3390/jimaging8030064
Venue
JOURNAL OF IMAGING
Keywords
deep learning, neural network pruning, computer vision, convolutional neural networks
DocType
Journal
Volume
8
Issue
3
ISSN
2313-433X
Citations
1
PageRank
0.41
References
2
Authors
6
Name                Order  Citations  PageRank
Hugo Tessier        1      1          0.41
Vincent Gripon      2      210        27.16
Mathieu Léonardon   3      1          0.41
Matthieu Arzel      4      69         15.10
Thomas Hannagan     5      1          0.41
Bertrand David      6      172        14.45