Title
Implicit Regularization via Neural Feature Alignment
Abstract
We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al. (2018), along a small number of task-relevant directions. This can be interpreted as a combined mechanism of feature selection and compression. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we motivate and study a heuristic complexity measure that captures this phenomenon, in terms of sequences of tangent kernel classes along optimization paths. The code for our experiments is available at https://github.com/tfjgeorge/ntk_alignment.
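The alignment phenomenon the abstract describes can be made concrete with a small sketch: build the tangent kernel of a model on a batch from per-example parameter gradients, then measure its centered alignment with the label kernel y yᵀ. The following is a minimal PyTorch illustration under stated assumptions (scalar-output model, toy data); it is not code from the linked repository, and the model, data, and function names are hypothetical.

import torch

def tangent_kernel(model, x):
    # Tangent feature Gram matrix: K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>.
    params = [p for p in model.parameters() if p.requires_grad]
    rows = []
    for i in range(x.shape[0]):
        out = model(x[i:i + 1]).squeeze()          # scalar output f(x_i)
        grads = torch.autograd.grad(out, params)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    jac = torch.stack(rows)                        # (n, num_params) Jacobian
    return jac @ jac.T                             # (n, n) tangent kernel

def centered_alignment(kernel, y):
    # Centered kernel alignment between the tangent kernel and y y^T.
    n = kernel.shape[0]
    center = torch.eye(n) - torch.ones(n, n) / n   # centering matrix
    k_c = center @ kernel @ center
    y_c = center @ torch.outer(y, y) @ center
    return (k_c * y_c).sum() / (k_c.norm() * y_c.norm())

# Toy usage (hypothetical model and data): tracking this scalar across training
# steps would trace the dynamical alignment along the optimization path.
model = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 1))
x, y = torch.randn(32, 10), torch.randn(32).sign()
print(centered_alignment(tangent_kernel(model, x), y))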
Year
2021
Venue
24th International Conference on Artificial Intelligence and Statistics (AISTATS)
Keywords
Rademacher complexity, Deep learning, Regularization (mathematics), Feature selection, Kernel (linear algebra), Tangent, Heuristic, Linear model, Algorithm, Computer science, Artificial intelligence
DocType
Conference
Volume
130
ISSN
2640-3498
Citations
0
PageRank
0.34
References
0
Authors
7
Name                   Order  Citations  PageRank
Aristide Baratin       1      25         2.69
Thomas George          2      4          1.41
César Laurent          3      87         5.29
R Devon Hjelm          4      0          0.34
Guillaume Lajoie       5      8          6.67
Pascal Vincent         6      7834       504.76
Simon Lacoste-Julien   7      0          0.34