Abstract |
---|
In this paper, we consider the problem of sequentially optimizing a black-box function $f$ based on noisy samples and bandit feedback. We assume that $f$ is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after $T$ rounds, and on the cumulative regret, measuring the sum of regrets over the $T$ chosen points. For the isotropic squared-exponential kernel in $d$ dimensions, we find that an average simple regret of $\epsilon$ requires $T = \Omega\big(\frac{1}{\epsilon^2} (\log\frac{1}{\epsilon})^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big( \sqrt{T(\log T)^d} \big)$, thus matching existing upper bounds up to the replacement of $d/2$ by $d+O(1)$ in both cases. For the Matérn-$\nu$ kernel, we give analogous bounds of the form $\Omega\big( (\frac{1}{\epsilon})^{2+d/\nu} \big)$ and $\Omega\big( T^{\frac{\nu + d}{2\nu + d}} \big)$, and discuss the resulting gaps to the existing upper bounds. |
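The scaling of the stated lower bounds can be made concrete with a small sketch. The helper names below are hypothetical (not from the paper), and constants hidden by the $\Omega(\cdot)$ notation are dropped; the functions only reproduce the growth rates $T = \Omega\big(\frac{1}{\epsilon^2}(\log\frac{1}{\epsilon})^{d/2}\big)$ for the squared-exponential kernel and $T = \Omega\big((\frac{1}{\epsilon})^{2+d/\nu}\big)$ for the Matérn-$\nu$ kernel.

```python
import math

def se_simple_regret_lower_bound(eps: float, d: int) -> float:
    """Growth rate (constants dropped) of the sample-complexity lower bound
    T = Omega((1/eps^2) * (log(1/eps))^(d/2)) for the isotropic SE kernel."""
    return (1.0 / eps**2) * math.log(1.0 / eps) ** (d / 2)

def matern_simple_regret_lower_bound(eps: float, d: int, nu: float) -> float:
    """Growth rate of T = Omega((1/eps)^(2 + d/nu)) for the Matern-nu kernel."""
    return (1.0 / eps) ** (2 + d / nu)

# Halving the target regret eps multiplies the required T: the SE bound grows
# slightly faster than 1/eps^2 (extra polylog factor), while the Matern bound
# grows polynomially faster, with the gap widening as nu shrinks or d grows.
for eps in (0.1, 0.05, 0.01):
    print(eps,
          se_simple_regret_lower_bound(eps, d=2),
          matern_simple_regret_lower_bound(eps, d=2, nu=2.5))
```

For large $\nu$ the Matérn exponent $2 + d/\nu$ approaches $2$, consistent with the Matérn kernel approaching the squared-exponential kernel's near-parametric $1/\epsilon^2$ rate.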
Year | Venue | DocType
---|---|---
2017 | COLT | Conference

Volume | Citations | PageRank
---|---|---
abs/1706.00090 | 2 | 0.39

References | Authors
---|---
9 | 3

Name | Order | Citations | PageRank |
---|---|---|---|
Jonathan Scarlett | 1 | 163 | 31.49 |
Ilija Bogunovic | 2 | 29 | 7.33 |
Volkan Cevher | 3 | 1860 | 141.56 |