Title
How Hard Is Robust Mean Estimation?
Abstract
Robust mean estimation is the problem of estimating the mean $\mu \in \mathbb{R}^d$ of a $d$-dimensional distribution $D$ from a list of independent samples, an $\epsilon$-fraction of which have been arbitrarily corrupted by a malicious adversary. Recent algorithmic progress has resulted in the first polynomial-time algorithms which achieve \emph{dimension-independent} rates of error: for instance, if $D$ has covariance $I$, in polynomial time one may find $\hat{\mu}$ with $\|\mu - \hat{\mu}\| \leq O(\sqrt{\epsilon})$. However, the error rates achieved by current polynomial-time algorithms, while dimension-independent, are sub-optimal in many natural settings, such as when $D$ is sub-Gaussian or has bounded $4$-th moments. In this work we give worst-case complexity-theoretic evidence that improving on the error rates of current polynomial-time algorithms for robust mean estimation may be computationally intractable in natural settings. We show that several natural approaches to improving error rates of current polynomial-time robust mean estimation algorithms would imply efficient algorithms for the small-set expansion problem, refuting Raghavendra and Steurer's small-set expansion hypothesis (so long as $P \neq NP$). We also give the first direct reduction to the robust mean estimation problem, starting from a plausible but nonstandard variant of the small-set expansion problem.
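To make the setup concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of the contamination model described in the abstract, together with a naive spectral-filtering heuristic in the spirit of known polynomial-time robust mean estimators. It assumes NumPy; the parameters, the function name `filtered_mean`, and the stopping threshold are illustrative choices only.

```python
# Hypothetical illustration (not the paper's algorithm): the strong-contamination model
# for robust mean estimation, and a naive spectral filter that repeatedly removes points
# with large projection onto the top eigenvector of the empirical covariance.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 100, 20000, 0.05
true_mean = np.zeros(d)

# Draw i.i.d. samples from D = N(true_mean, I), then let an adversary replace an
# eps-fraction of them with a far-away cluster along a single direction.
X = rng.standard_normal((n, d)) + true_mean
k = int(eps * n)
X[:k] = 10.0 * np.ones(d) / np.sqrt(d)  # adversarial points at distance 10 from the mean

def filtered_mean(X, eps, iters=20):
    """Naive filter: while the empirical covariance has a large top eigenvalue,
    drop the eps-fraction of points with the largest projection onto that direction."""
    X = X.copy()
    for _ in range(iters):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        top_val, top_dir = eigvals[-1], eigvecs[:, -1]
        if top_val <= 1 + 4 * eps:  # covariance already close to I: stop filtering
            return mu
        scores = ((X - mu) @ top_dir) ** 2
        cutoff = np.quantile(scores, 1 - eps)  # trim the most extreme eps-fraction
        X = X[scores < cutoff]
    return X.mean(axis=0)

naive = np.linalg.norm(X.mean(axis=0) - true_mean)
robust = np.linalg.norm(filtered_mean(X, eps) - true_mean)
print(f"empirical mean error: {naive:.3f}")  # pulled by ~eps * (outlier magnitude)
print(f"filtered mean error:  {robust:.3f}")  # small; does not grow with the outliers
```

On this synthetic instance the empirical mean is shifted by roughly $\epsilon$ times the outliers' magnitude, while the filtered estimate stays close to the true mean, illustrating the kind of dimension-independent guarantee discussed in the abstract.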
Year: 2019
Venue: COLT
Field: Discrete mathematics, Combinatorics, Mean estimation, Mathematics, Covariance, Bounded function
DocType:
Volume: abs/1903.07870
Citations: 0
Journal:
PageRank: 0.34
References: 12
Authors: 2
Name: Samuel Hopkins, Order: 1, Citations: 88, PageRank: 9.47
Name: Jerry Li, Order: 2, Citations: 2292, PageRank: 2.67