Abstract
---
We study the fundamental problem of learning the parameters of a high-dimensional Gaussian in the presence of noise, where an ε-fraction of our samples were chosen by an adversary. We give robust estimators that achieve estimation error O(ε) in the total variation distance, which is optimal up to a universal constant that is independent of the dimension.
In the case where just the mean is unknown, our robustness guarantee is optimal up to a factor of √2 and the running time is polynomial in d and 1/ε. When both the mean and the covariance are unknown, the running time is polynomial in d and quasipolynomial in 1/ε. Moreover, all of our algorithms require only a polynomial number of samples. Our work shows that the same sorts of error guarantees that were established over fifty years ago in the one-dimensional setting can also be achieved by efficient algorithms in high-dimensional settings.
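To make the contamination model concrete, here is a minimal one-dimensional sketch (in Python with NumPy; it is an illustration of the setting, not the paper's high-dimensional algorithm, and all variable names and constants are assumptions for the example). It corrupts an ε-fraction of Gaussian samples and contrasts the sample mean, whose error scales with wherever the adversary places the outliers, against the median, which keeps O(ε) error, the classical one-dimensional guarantee that, per the abstract, the paper's estimators match efficiently in high dimensions.

```python
# Minimal sketch of the epsilon-contamination model in one dimension.
# Illustration only; NOT the paper's algorithm. Names and constants are
# assumptions chosen for the example.
import numpy as np

rng = np.random.default_rng(0)

n, eps = 10_000, 0.05       # sample size and fraction of corrupted points
true_mean = 0.0

# Draw clean samples from N(true_mean, 1).
samples = rng.normal(true_mean, 1.0, size=n)

# The adversary replaces an eps-fraction of the samples with outliers.
k = int(eps * n)
samples[:k] = 100.0         # arbitrary adversarial placement

# The sample mean is dragged by roughly eps * 100, while the median
# stays within O(eps) of the true mean.
print(f"sample mean error: {abs(samples.mean() - true_mean):.3f}")
print(f"median error:      {abs(np.median(samples) - true_mean):.3f}")
```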
Year | DOI | Venue
---|---|---
2018 | 10.5555/3174304.3175475 | SODA '18: Symposium on Discrete Algorithms, New Orleans, Louisiana, January 2018
DocType | Volume | ISBN
---|---|---
Conference | abs/1704.03866 | 978-1-61197-503-1
Citations | PageRank | References
---|---|---
7 | 0.46 | 9
Authors
---
6
Name | Order | Citations | PageRank |
---|---|---|---
Ilias Diakonikolas | 1 | 776 | 64.21 |
Gautam Kamath | 2 | 126 | 16.77 |
Daniel M. Kane | 3 | 743 | 61.43 |
Jerry Li | 4 | 229 | 22.67 |
Ankur Moitra | 5 | 892 | 56.19 |
Alistair Stewart | 6 | 175 | 17.04 |