Title
Agnostic Learning of a Single Neuron with Gradient Descent
Abstract
We consider the problem of learning the best-fitting single neuron as measured by the expected squared loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over an unknown joint distribution of the features and labels by using gradient descent on the empirical risk induced by a set of i.i.d. samples $S \sim \mathcal{D}^n$. The activation function $\sigma$ is an arbitrary Lipschitz and non-decreasing function, making the optimization problem nonconvex and nonsmooth in general, and covers typical neural network activation functions and inverse link functions in the generalized linear model setting. In the agnostic PAC learning setting, where no assumption on the relationship between the labels $y$ and the features $x$ is made, if the population risk minimizer $v$ has risk $\mathsf{OPT}$, we show that gradient descent achieves population risk $O( \mathsf{OPT}^{1/2} )+\epsilon$ in polynomial time and sample complexity. When labels take the form $y = \sigma(v^\top x) + \xi$ for zero-mean sub-Gaussian noise $\xi$, we show that gradient descent achieves population risk $\mathsf{OPT} + \epsilon$. Our sample complexity and runtime guarantees are (almost) dimension independent, and when $\sigma$ is strictly increasing and Lipschitz, require no distributional assumptions beyond boundedness. For ReLU, we show the same results under a nondegeneracy assumption for the marginal distribution of the features. To the best of our knowledge, this is the first result for agnostic learning of a single neuron using gradient descent.
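The following is a minimal sketch of the procedure the abstract describes: running gradient descent on the empirical squared loss of a single neuron $\sigma(w^\top x)$. The concrete choices here (ReLU activation, standard Gaussian features, a synthetic teacher vector v with zero-mean sub-Gaussian label noise, the step size, and the iteration count) are illustrative assumptions for the sketch, not the paper's algorithm parameters or experiments.

```python
import numpy as np

# Sketch: gradient descent on the empirical squared loss of a single neuron
# sigma(w^T x), in the noisy-teacher setting y = sigma(v^T x) + xi described
# in the abstract. All constants below are illustrative assumptions.

rng = np.random.default_rng(0)

d, n = 20, 5000                        # dimension, number of i.i.d. samples
v = rng.normal(size=d)
v /= np.linalg.norm(v)                 # population risk minimizer (teacher neuron)

X = rng.normal(size=(n, d))            # sub-Gaussian feature marginal (assumed Gaussian here)
xi = 0.1 * rng.normal(size=n)          # zero-mean sub-Gaussian label noise
y = np.maximum(X @ v, 0.0) + xi        # labels y = sigma(v^T x) + xi

def sigma(z):
    return np.maximum(z, 0.0)          # ReLU: 1-Lipschitz and non-decreasing

def empirical_risk(w):
    return np.mean((sigma(X @ w) - y) ** 2)

# Gradient descent on the (nonconvex, nonsmooth) empirical risk, using the
# subgradient sigma'(z) = 1{z > 0} for ReLU.
w = 0.01 * rng.normal(size=d)          # small random initialization (illustrative)
eta = 0.05
for _ in range(500):
    pre = X @ w
    residual = sigma(pre) - y
    grad = 2.0 * np.mean((residual * (pre > 0.0))[:, None] * X, axis=0)
    w -= eta * grad

# If training succeeds, the empirical risk should approach roughly the noise
# variance (0.01 here) and w should be close to the teacher v.
print(f"empirical risk:      {empirical_risk(w):.4f}")
print(f"distance to teacher: {np.linalg.norm(w - v):.4f}")
```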
Year: 2020
Venue: NIPS 2020
DocType: Conference
Volume: 33
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name | Order | Citations | PageRank
Frei, Spencer | 1 | 1 | 2.04
Cao Yuan | 2 | 0 | 0.34
Quanquan Gu | 3 | 11167 | 8.25