Synergy and Symmetry in Deep Learning: Interactions between the Data, Model, and Inference Algorithm | 0 | 0.34 | 2022 |
Exploring the Uncertainty Properties of Neural Networks' Implicit Priors in the Infinite-Width Limit | 0 | 0.34 | 2021 |
Neural Tangents: Fast and Easy Infinite Neural Networks in Python | 0 | 0.34 | 2020 |
Disentangling Trainability and Generalization in Deep Neural Networks | 0 | 0.34 | 2020 |
Finite Versus Infinite Neural Networks: an Empirical Study | 0 | 0.34 | 2020 |
The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks | 0 | 0.34 | 2020 |
Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks | 0 | 0.34 | 2020 |
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent | 12 | 0.48 | 2019 |
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes | 13 | 0.52 | 2019 |
Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks | 17 | 0.59 | 2018 |