Title
Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks
Abstract
Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect can suffer reduced generalization or even divergence. In this paper, we propose accumulated decoupled learning (ADL), which incorporates module-wise gradient accumulation to mitigate gradient staleness. Unlike prior art that ignores gradient staleness, we quantify the staleness so that its mitigation can be quantitatively visualized. As a new learning scheme, ADL is theoretically shown to converge to critical points despite its asynchronism. Extensive experiments on the CIFAR-10 and ImageNet datasets demonstrate that ADL achieves promising generalization while state-of-the-art methods suffer reduced generalization or divergence. In addition, ADL exhibits the fastest training speed among the compared methods. Code will be available soon at https://github.com/ZHUANGHP/AccumulatedDecoupled-Learning.git.
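The core mechanism named in the abstract, module-wise gradient accumulation at the decoupling points, can be illustrated with a short sketch. The following is a minimal, synchronous PyTorch approximation, not the authors' released code: the two-module split, the layer sizes, the toy loader, and the accumulation length K are all assumptions made for illustration.

```python
# Minimal sketch of module-wise gradient accumulation at a decoupling point,
# assuming a PyTorch-style setup. NOT the authors' ADL implementation: the
# two modules run synchronously here, whereas ADL trains them asynchronously
# and uses the accumulation to damp the resulting gradient staleness.
import torch
import torch.nn as nn

# Toy loader standing in for CIFAR-10-sized inputs (hypothetical data).
loader = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
          for _ in range(16)]

# The network split into two decoupled modules; in ADL each module would
# live on its own worker, exchanging activations and boundary gradients.
module_a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
module_b = nn.Sequential(nn.Flatten(), nn.Linear(16 * 32 * 32, 10))

opt_a = torch.optim.SGD(module_a.parameters(), lr=0.1)
opt_b = torch.optim.SGD(module_b.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
K = 4  # micro-batches accumulated per parameter update (assumed setting)

for step, (x, y) in enumerate(loader):
    h = module_a(x)
    # Detach at the module boundary: this is the decoupling point where,
    # in asynchronous training, the returned gradient would be stale.
    h_boundary = h.detach().requires_grad_(True)

    loss = loss_fn(module_b(h_boundary), y) / K  # scale for averaging
    loss.backward()                  # accumulates grads in module_b

    # Propagate the boundary gradient into module_a (instantaneous here;
    # in ADL it would arrive with a delay of several iterations).
    h.backward(h_boundary.grad)      # accumulates grads in module_a

    if (step + 1) % K == 0:
        # One update per K micro-batches: averaging the (possibly stale)
        # gradients over the window is the mitigation idea in the abstract.
        opt_a.step(); opt_a.zero_grad()
        opt_b.step(); opt_b.zero_grad()
```

In the true asynchronous setting the boundary gradient arrives several iterations late; averaging K such gradients before each update smooths out part of that staleness, which is the effect ADL quantifies and mitigates.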
Year
2021
Venue
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139
DocType
Conference
Volume
139
ISSN
2640-3498
Citations
0
PageRank
0.34
References
6
Authors
6
Name	Order	Citations	PageRank
Huiping Zhuang	1	0	0.34
Zhenyu Weng	2	0	0.34
Fulin Luo	3	34	5.85
Kar-Ann Toh	4	0	0.34
Haizhou Li	5	3678	334.61
Zhiping Lin	6	839	83.62