Title
A Walk with SGD
Abstract
Exploring why stochastic gradient descent (SGD) based optimization methods train deep neural networks (DNNs) that generalize well has recently become an active area of research. Towards this end, we empirically study the dynamics of SGD when training over-parametrized deep networks. Specifically, we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during the training process. We find that the covariance structure of the noise induced by mini-batches is quite special in that it allows SGD to descend and explore the loss surface while avoiding barriers along its path. Specifically, our experiments show evidence that for most of training, SGD explores regions along a valley by bouncing off valley walls at a height above the valley floor. This 'bouncing off walls at a height' mechanism helps SGD traverse a larger distance for small batch sizes and large learning rates, which we find play qualitatively different roles in the dynamics: while a large learning rate maintains a large height above the valley floor, a small batch size injects noise that facilitates exploration. We find this mechanism is crucial for generalization because the floor of the valley has barriers, and exploration above the valley floor allows SGD to quickly travel far from the initialization point (without being affected by barriers) and find flatter regions, which correspond to better generalization.
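The core measurement described in the abstract, evaluating the loss along the straight line between parameters from consecutive SGD iterations, can be illustrated with a small self-contained sketch. This is not the authors' code: the paper's experiments use deep networks, whereas the toy linear-regression model, synthetic data, hyperparameters, and helper names below (full_loss, minibatch_grad, interpolate_loss) are hypothetical stand-ins chosen only to keep the example runnable.

```python
# Minimal sketch of the interpolation measurement: run mini-batch SGD and,
# between consecutive iterates w_t and w_{t+1}, evaluate the full training
# loss at w(alpha) = (1 - alpha) * w_t + alpha * w_{t+1} for alpha in [0, 1].
# Toy linear regression stands in for the DNNs used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (hypothetical stand-in for a real training set).
n, d = 1024, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def full_loss(w):
    """Mean squared error on the full training set."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def minibatch_grad(w, batch_size=32):
    """Gradient of the MSE on a random mini-batch (the source of SGD noise)."""
    idx = rng.choice(n, size=batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch_size

def interpolate_loss(w_a, w_b, num_points=11):
    """Loss along the straight line between two consecutive SGD iterates."""
    alphas = np.linspace(0.0, 1.0, num_points)
    losses = np.array([full_loss((1 - a) * w_a + a * w_b) for a in alphas])
    return alphas, losses

w = rng.normal(size=d)           # initialization
lr, batch_size = 0.1, 32         # hypothetical hyperparameters
for t in range(200):
    w_prev = w.copy()
    w = w - lr * minibatch_grad(w, batch_size)
    if t % 50 == 0:
        alphas, losses = interpolate_loss(w_prev, w)
        # If the minimum along the segment lies strictly between the endpoints,
        # the step has overshot a local dip -- a signature consistent with the
        # "bouncing off valley walls" behaviour discussed in the abstract.
        print(f"step {t:3d}: endpoints {losses[0]:.4f} -> {losses[-1]:.4f}, "
              f"min {losses.min():.4f} at alpha={alphas[losses.argmin()]:.1f}")
```

In the paper's setting the same one-dimensional interpolation is applied to DNN parameters at consecutive iterations; the quantity of interest is whether the loss along the segment dips below both endpoints (indicating the iterate crossed over a dip or barrier) or decreases monotonically.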
Year: 2018
Venue: arXiv: Machine Learning
Field: Stochastic gradient descent, Mathematical optimization, Interpolation, Regular polygon, Initialization, Mathematics, Deep neural networks, Trajectory, Traverse
DocType:
Volume: abs/1802.08770
Citations: 8
Journal:
PageRank: 0.51
References: 26
Authors: 4
Name | Order | Citations | PageRank
Chen Xing | 1 | 8 | 2.20
Devansh Arpit | 2 | 146 | 14.24
Christos Tsirigotis | 3 | 8 | 0.85
Yoshua Bengio | 4 | 42677 | 3039.83