Title
Deep double descent: where bigger models and more data hurt
Abstract
We show that a variety of modern deep learning tasks exhibit a 'double-descent' phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
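The model-size double descent described in the abstract can be illustrated in miniature without deep networks. The sketch below is not the paper's experimental setup (which uses CNNs, ResNets, and transformers); it is a hypothetical random-features least-squares toy, with made-up sizes `d`, `n_train`, and the swept feature counts, where the minimum-norm solution typically shows the same rise-then-fall in test error as "model size" (here, the number of random features) crosses the interpolation threshold.

```python
# Minimal sketch of model-wise double descent with ReLU random features and
# minimum-norm least squares (a toy stand-in for the deep networks in the paper).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: noisy linear target in d dimensions.
d, n_train, n_test, noise = 20, 100, 1000, 0.5
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + noise * rng.normal(size=n_train)
y_test = X_test @ w_true + noise * rng.normal(size=n_test)

def test_error(n_features: int) -> float:
    """Fit minimum-norm least squares on ReLU random features; return test MSE."""
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)   # fixed random projection
    phi_tr = np.maximum(X_train @ W, 0.0)               # ReLU features, train
    phi_te = np.maximum(X_test @ W, 0.0)                # ReLU features, test
    beta = np.linalg.pinv(phi_tr) @ y_train             # minimum-norm solution
    return float(np.mean((phi_te @ beta - y_test) ** 2))

# Sweep "model size" (feature count) through the interpolation threshold n_train.
for p in [10, 50, 90, 100, 110, 200, 500, 2000]:
    print(f"features={p:5d}  test MSE={test_error(p):.3f}")
```

On most seeds the printed test MSE grows as the feature count approaches n_train = 100 and then falls again well past it, mirroring the transition from the under-parameterized to the over-parameterized regime that the abstract describes.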
Year
2020
DOI
10.1088/1742-5468/ac3a74
Venue
JOURNAL OF STATISTICAL MECHANICS-THEORY AND EXPERIMENT
Keywords
deep learning, machine learning, statistical inference
DocType
Conference
Volume
2021
Issue
12
ISSN
1742-5468
Citations
2
PageRank
0.64
References
15
Authors
6
Name | Order | Citations | PageRank
Preetum Nakkiran | 1 | 64 | 6.05
Gal Kaplun | 2 | 2 | 0.98
Yamini Kannan | 3 | 158 | 14.59
Tristan Yang | 4 | 2 | 0.64
Boaz Barak | 5 | 2563 | 127.61
Ilya Sutskever | 6 | 258141 | 120.24