Abstract
---
We consider the problem of online boosting for regression tasks when only limited information is available to the learner. We give an efficient regret-minimization method that has two implications: an online boosting algorithm with noisy multi-point bandit feedback, and a new projection-free online convex optimization algorithm with stochastic gradients that improves on state-of-the-art guarantees in terms of efficiency.
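The abstract mentions a projection-free online convex optimization algorithm with stochastic gradients. The paper's actual algorithm is not reproduced here; as a generic illustration of the projection-free idea, the following is a minimal sketch of classical stochastic Frank-Wolfe over the probability simplex, where the projection step is replaced by a cheap linear minimization over the constraint set (the function names and problem instance are invented for the example):

```python
import random

def stochastic_frank_wolfe(grad_oracle, dim, steps, noise=0.0, seed=0):
    """Projection-free minimization over the probability simplex using
    noisy gradient estimates (generic Frank-Wolfe sketch, not the
    algorithm from the paper)."""
    rng = random.Random(seed)
    x = [1.0 / dim] * dim  # start at the simplex center
    for t in range(1, steps + 1):
        # Noisy gradient estimate, standing in for stochastic feedback.
        g = [gi + rng.gauss(0.0, noise) for gi in grad_oracle(x)]
        # Linear minimization over the simplex: the minimizing vertex is
        # the coordinate basis vector e_i with the smallest gradient entry.
        i = min(range(dim), key=lambda j: g[j])
        eta = 2.0 / (t + 2)  # standard Frank-Wolfe step size
        x = [(1 - eta) * xj for xj in x]
        x[i] += eta
    return x

# Toy instance: minimize f(x) = sum((x - c)^2) over the simplex.
c = [0.5, 0.3, 0.2]
grad = lambda x: [2 * (xj - cj) for xj, cj in zip(x, c)]
x = stochastic_frank_wolfe(grad, dim=3, steps=500, noise=0.01)
```

Because the update is a convex combination of simplex points, the iterate stays feasible at every step without ever computing a projection, which is the efficiency advantage projection-free methods trade on.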
| Year | Venue | DocType |
| --- | --- | --- |
| 2021 | ALT | Conference |
| Citations | PageRank | References |
| --- | --- | --- |
| 0 | 0.34 | 0 |
Authors (2)
---

| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Nataly Brukhim | 1 | 1 | 2.04 |
| Elad Hazan | 2 | 1619 | 111.90 |