Title
Robustly Learning a Gaussian: Getting Optimal Error, Efficiently.
Abstract
We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise, where an ε-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error O(ε) in the total variation distance, which is optimal up to a universal constant that is independent of the dimension. In the case where just the mean is unknown, our robustness guarantee is optimal up to a factor of √2 and the running time is polynomial in d and 1/ε. When both the mean and covariance are unknown, the running time is polynomial in d and quasipolynomial in 1/ε. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same sorts of error guarantees that were established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings.
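The contamination model in the abstract can be made concrete with a small simulation. The sketch below is a toy illustration only, not the paper's algorithm: it assumes NumPy, the function names are hypothetical, and the adversary shown is a simple shift rather than a worst-case one. It draws samples from a standard Gaussian, corrupts an ε-fraction of them, and compares the empirical mean against a coordinate-wise median baseline, whose error grows with the dimension d, in contrast to the dimension-independent O(ε) guarantee established in the paper.

import numpy as np

def sample_contaminated_gaussian(n, d, eps, rng):
    # Draw n samples from N(0, I_d), then let a (non-worst-case) adversary
    # replace an eps-fraction of them with a far-away cluster of points.
    clean = rng.standard_normal((n, d))
    n_bad = int(eps * n)
    clean[:n_bad] = 5.0 + 0.1 * rng.standard_normal((n_bad, d))
    return clean

def coordinatewise_median(samples):
    # Baseline robust mean estimate; its error scales roughly like eps * sqrt(d),
    # far from the dimension-independent O(eps) error discussed in the abstract.
    return np.median(samples, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n, eps = 50, 20000, 0.05
    X = sample_contaminated_gaussian(n, d, eps, rng)
    # The true mean is 0, so the norm of each estimate is its estimation error.
    print("empirical mean error:   ", np.linalg.norm(X.mean(axis=0)))
    print("coordinate median error:", np.linalg.norm(coordinatewise_median(X)))

Running this, the empirical mean is dragged toward the corrupted cluster by an amount that scales with √d, while the coordinate-wise median is more robust but still pays a dimension-dependent cost, which is the gap the paper's estimators close.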
Year: 2018
DOI: 10.5555/3174304.3175475
Venue: SODA '18: Symposium on Discrete Algorithms, New Orleans, Louisiana, January 2018
DocType: Conference
Volume: abs/1704.03866
ISBN: 978-1-61197-503-1
Citations: 7
PageRank: 0.46
References: 9
Authors: 6
Name                  Order  Citations  PageRank
Ilias Diakonikolas    1      776        64.21
Gautam Kamath         2      126        16.77
Daniel M. Kane        3      743        61.43
Jerry Li              4      229        22.67
Ankur Moitra          5      892        56.19
Alistair Stewart      6      175        17.04