Title: Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint
Abstract: This paper investigates the generalization properties of two-layer neural networks in high dimensions, i.e. when the number of samples $n$, features $d$, and neurons $h$ tend to infinity at the same rate. Specifically, we derive the exact population risk of the unregularized least squares regression problem with two-layer neural networks when either the first or the second layer is trained using a gradient flow under different initialization setups. When only the second-layer coefficients are optimized, we recover the \textit{double descent} phenomenon: a cusp in the population risk appears at $h\approx n$, and further overparameterization decreases the risk. In contrast, when the first-layer weights are optimized, we highlight how different scales of initialization lead to different inductive biases, and show that the resulting risk is \textit{independent} of overparameterization. Our theoretical and experimental results suggest that previously studied model setups that provably give rise to \textit{double descent} might not translate to optimizing two-layer neural networks.
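The second-layer-only setting described in the abstract can be illustrated numerically: with the first-layer weights frozen, the network is a random-features model, and unregularized least squares on the second layer (the limit of a gradient flow started from zero) shows the cusp near $h\approx n$. The sketch below is only a minimal illustration under assumed choices (Gaussian inputs, ReLU activation, a linear teacher, and the minimum-norm least-squares solution in place of the gradient flow); it is not the paper's exact data model or experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 100                                  # samples and input dimension
beta = rng.standard_normal(d) / np.sqrt(d)       # hypothetical linear teacher

def population_risk(h, n_test=2000):
    """Fit only the second-layer weights by least squares; estimate risk on fresh data."""
    W = rng.standard_normal((d, h)) / np.sqrt(d)  # fixed (untrained) first layer
    X = rng.standard_normal((n, d))
    y = X @ beta
    Phi = np.maximum(X @ W, 0.0)                  # ReLU random features
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # min-norm least-squares solution
    Xt = rng.standard_normal((n_test, d))
    yt = Xt @ beta
    pred = np.maximum(Xt @ W, 0.0) @ a
    return np.mean((pred - yt) ** 2)

for h in [50, 100, 200, 400, 800]:                # sweep the width around h ~ n
    print(h, population_risk(h))
```

In such a sweep the estimated risk typically peaks when the number of neurons is close to the number of samples and decreases again with further overparameterization, which is the double-descent behavior the abstract refers to for the second-layer-only case.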
Year: 2020
Venue: ICLR
Keywords: Neural Networks, Generalization, High-dimensional Statistics
DocType: Conference
Citations: 0
PageRank: 0.34
References: 39
Authors: 5
Name              Order  Citations  PageRank
Lei Jimmy Ba      1      8887       296.55
Murat A. Erdogdu  2      1          2.37
Taiji Suzuki      3      577        45.13
Denny C.-Y. Wu    4      1          4.06
Tianzong Zhang    5      0          0.34