Title
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent.
Abstract
A longstanding goal in deep learning research has been to precisely characterize training and generalization. However, the often complex loss landscapes of neural networks have made a theory of learning dynamics elusive. In this work, we show that for wide neural networks the learning dynamics simplify considerably and that, in the infinite width limit, they are governed by a linear model obtained from the first-order Taylor expansion of the network around its initial parameters. Furthermore, mirroring the correspondence between wide Bayesian neural networks and Gaussian processes, gradient-based training of wide neural networks with a squared loss produces test set predictions drawn from a Gaussian process with a particular compositional kernel. While these theoretical results are only exact in the infinite width limit, we nevertheless find excellent empirical agreement between the predictions of the original network and those of the linearized version even for finite practically-sized networks. This agreement is robust across different architectures, optimization methods, and loss functions.
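For concreteness, the linearized model the abstract refers to is the first-order Taylor expansion of the network in its parameters, f_lin(x; θ) = f(x; θ₀) + ∇_θ f(x; θ₀)·(θ − θ₀). The following is a minimal sketch in plain JAX, not the authors' code: the architecture, widths, and function names are illustrative assumptions, and it only demonstrates that a wide network and its linearization around the initial parameters remain close under small parameter changes.

```python
# Minimal sketch (assumed example, not the paper's implementation) of the
# first-order Taylor expansion of a network around its initial parameters:
#   f_lin(x; theta) = f(x; theta_0) + J_theta f(x; theta_0) (theta - theta_0)
import jax
import jax.numpy as jnp

def init_params(key, widths=(10, 2048, 2048, 1)):
    """Random MLP weights with 1/sqrt(fan-in) scaling (illustrative choice)."""
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in))
    return params

def f(params, x):
    """Forward pass of a ReLU MLP; returns one scalar output per example."""
    h = x
    for w in params[:-1]:
        h = jax.nn.relu(h @ w)
    return (h @ params[-1]).squeeze(-1)

def f_lin(params, params0, x):
    """First-order Taylor expansion of f in the parameters around params0."""
    dparams = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
    y0, tangent_out = jax.jvp(lambda p: f(p, x), (params0,), (dparams,))
    return y0 + tangent_out

key = jax.random.PRNGKey(0)
params0 = init_params(key)
x = jax.random.normal(jax.random.PRNGKey(1), (4, 10))

# Perturb the parameters slightly; for wide hidden layers the network and its
# linearization stay close, mirroring the agreement described in the abstract.
perturbed = jax.tree_util.tree_map(lambda p: p - 1e-3 * jnp.sign(p), params0)
print(f(perturbed, x) - f_lin(perturbed, params0, x))  # small discrepancy
```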
Year
2019
Venue
Advances in Neural Information Processing Systems 32 (NIPS 2019)
Keywords
neural networks, gaussian processes
Field
Statistical physics, Kernel (linear algebra), Mathematical optimization, Gradient descent, Linear model, Artificial intelligence, Gaussian process, Deep learning, Artificial neural network, Mathematics, Taylor series, Test set
DocType
Journal
Volume
32
ISSN
1049-5258
Citations
12
PageRank
0.48
References
34
Authors
6
Name | Order | Citations | PageRank
Jaehoon Lee | 1 | 48 | 2.98
Lechao Xiao | 2 | 42 | 3.95
Samuel S. Schoenholz | 3 | 330 | 16.69
Yasaman Bahri | 4 | 117 | 5.80
Jascha Sohl-Dickstein | 5 | 673 | 82.82
Jeffrey Pennington | 6 | 3722 | 134.21