Title
INERTIAL PROXIMAL DEEP LEARNING ALTERNATING MINIMIZATION FOR EFFICIENT NEURAL NETWORK TRAINING
Abstract
In recent years, Deep Learning Alternating Minimization (DLAM), which applies alternating minimization to the penalty formulation of deep neural network training, has been developed as an alternative algorithm that overcomes several drawbacks of Stochastic Gradient Descent (SGD). This work improves DLAM with the well-known inertial technique, yielding iPDLAM, which predicts a point by linear extrapolation of the current and previous iterates. To further speed up training, we apply a warm-up technique to the penalty parameter: it starts from a small initial value and increases over the iterations. Numerical results on real-world datasets are reported to demonstrate the efficiency of the proposed algorithm.
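As a rough illustration of the two ingredients described in the abstract, the following minimal Python sketch shows an inertial prediction step (linear extrapolation of the current and previous iterates) and a warm-up schedule for the penalty parameter. The function names and the constants beta, rho0, growth, and rho_max are illustrative assumptions, not values taken from the paper.

import numpy as np

def inertial_step(w_curr, w_prev, beta=0.5):
    # Inertial prediction: extrapolate linearly from the last two iterates.
    # beta is a hypothetical extrapolation weight (not from the paper).
    return w_curr + beta * (w_curr - w_prev)

def penalty_warmup(rho0=1e-3, growth=1.5, rho_max=1e2):
    # Warm-up for the penalty parameter: start small, grow geometrically,
    # and cap at rho_max. All constants here are illustrative.
    rho = rho0
    while True:
        yield rho
        rho = min(growth * rho, rho_max)

# Usage sketch: one inertial prediction with toy iterates.
w_prev = np.zeros(3)
w_curr = np.array([0.1, -0.2, 0.3])
w_hat = inertial_step(w_curr, w_prev)

schedule = penalty_warmup()
rho_k = next(schedule)  # small initial penalty, increased each iteration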
Year
2021
DOI
10.1109/ICASSP39728.2021.9413500
Venue
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)
Keywords
Nonconvex alternating minimization, Penalty, Inertial method, Network training
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name           Order   Citations   PageRank
Lin-Bo Qiao    1       23          10.80
Tao Sun        2       32          9.03
Hengyue Pan    3       8           3.84
Dongsheng Li   4       299         60.22