Title
Neural Network Training Techniques Regularize Optimization Trajectory: An Empirical Study
Abstract
Modern deep neural network (DNN) training uses various techniques, e.g., nonlinear activation functions, batch normalization, and skip connections. Despite their effectiveness, it remains unclear how these techniques help accelerate DNN training in practice. In this paper, we provide an empirical study of the regularization effect of these training techniques on DNN optimization. Specifically, we find that the optimization trajectories of successful DNN trainings consistently obey a certain regularity principle that regularizes the model update direction to be aligned with the trajectory direction. Theoretically, we show that such a regularity principle leads to a convergence guarantee in nonconvex optimization, and that the convergence rate depends on a regularization parameter. Empirically, we find that DNN trainings that apply the training techniques achieve fast convergence and obey the regularity principle with a large regularization parameter, implying that the model updates are well aligned with the trajectory. On the other hand, DNN trainings without the training techniques converge slowly and obey the regularity principle with a small regularization parameter, implying that the model updates are not well aligned with the trajectory.
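The regularity principle described above concerns how well each model update is aligned with the direction of the optimization trajectory. The following is a minimal sketch (not the authors' code) of how such alignment could be tracked during training with PyTorch; the model, optimizer, loss_fn, and data_loader arguments are placeholders, and the trajectory direction is approximated here as the displacement from the initial parameters.

import torch

def flatten_params(model):
    # Concatenate all model parameters into a single flat vector.
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def cosine(u, v, eps=1e-12):
    # Cosine similarity between two flat parameter vectors.
    return torch.dot(u, v) / (u.norm() * v.norm() + eps)

def train_and_track_alignment(model, optimizer, loss_fn, data_loader, device="cpu"):
    # Runs one pass over data_loader and records, at each step, the cosine
    # similarity between the model update (theta_{t+1} - theta_t) and the
    # trajectory direction (theta_t - theta_0).
    model.to(device)
    theta0 = flatten_params(model)        # starting point of the trajectory
    theta_prev = theta0.clone()
    alignments = []

    for x, y in data_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()

        theta = flatten_params(model)
        update = theta - theta_prev       # model update direction at this step
        trajectory = theta_prev - theta0  # trajectory direction so far
        if trajectory.norm() > 0:
            alignments.append(cosine(update, trajectory).item())
        theta_prev = theta
    return alignments

Large, consistently positive alignment values would correspond to the "large regularization parameter" regime described in the abstract; values near zero or negative would correspond to updates that are poorly aligned with the trajectory.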
Year
2020
DOI
10.1109/BigData50022.2020.9378359
Venue
2020 IEEE International Conference on Big Data (Big Data)
Keywords
Neural network, training techniques, nonconvex optimization, optimization trajectories, regularity principle
DocType
Conference
ISSN
2639-1589
ISBN
978-1-7281-6252-2
Citations
0
PageRank
0.34
References
0
Authors
3
Name          Order  Citations  PageRank
Cheng Chen    1      89         10.30
Junjie Yang   2      0          1.01
Yi Zhou       3      65         17.55