Title
Online Learning with Abstention
Abstract
We present an extensive study of the key problem of online learning where algorithms are allowed to abstain from making predictions. In the adversarial setting, we show how existing online algorithms and guarantees can be adapted to this problem. In the stochastic setting, we first point out a bias problem that limits the straightforward extension of algorithms such as UCB-N to time-varying feedback graphs, as needed in this context. Next, we give a new algorithm, UCB-GT, that exploits historical data and is adapted to time-varying feedback graphs. We show that this algorithm benefits from more favorable regret guarantees than a possible, but limited, extension of UCB-N. We further report the results of a series of experiments demonstrating that UCB-GT largely outperforms that extension of UCB-N, as well as more standard baselines.
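The abstract contrasts a limited extension of UCB-N to time-varying feedback graphs with the proposed UCB-GT. As background, here is a minimal Python sketch of the generic UCB-N idea it refers to: play the arm with the largest upper-confidence index, then update the empirical estimate of every arm in the played arm's feedback-graph neighborhood, which here may change from round to round. The function names, the `draw_reward` and `neighborhood` oracles, and the exploration constant `alpha` are illustrative assumptions; this is not the paper's UCB-GT algorithm, and it does not address the bias issue the abstract points out.

```python
import numpy as np

def ucb_n_time_varying(n_arms, horizon, draw_reward, neighborhood, alpha=2.0):
    """UCB-N-style loop on a (possibly time-varying) feedback graph.

    draw_reward(t, k): stochastic reward in [0, 1] of arm k at round t (hypothetical oracle).
    neighborhood(t, k): arms whose rewards are revealed when k is played at round t;
        assumed to contain k itself.
    """
    counts = np.zeros(n_arms)   # number of observations per arm
    means = np.zeros(n_arms)    # empirical mean reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if np.any(counts == 0):
            arm = int(np.argmin(counts))                 # observe every arm at least once
        else:
            bonus = np.sqrt(alpha * np.log(t) / counts)  # confidence radius
            arm = int(np.argmax(means + bonus))          # standard UCB index
        # Observe the played arm and all its current feedback-graph neighbors,
        # then fold every observation into the empirical means.
        obs = {k: draw_reward(t, k) for k in neighborhood(t, arm)}
        total += obs[arm]
        for k, r in obs.items():
            counts[k] += 1
            means[k] += (r - means[k]) / counts[k]
    return total

# Hypothetical usage: 3 Bernoulli arms; playing arm k also reveals arm (k + 1) % 3,
# with the graph kept fixed over time here only for simplicity.
p = [0.2, 0.5, 0.8]
reward = lambda t, k: float(np.random.rand() < p[k])
nbrs = lambda t, k: {k, (k + 1) % 3}
print(ucb_n_time_varying(3, 5000, reward, nbrs))
```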
Year
2018
Venue
ICML
DocType
Conference
Volume
abs/1703.03478
Citations
3
PageRank
0.48
References
6
Authors
4
Name            Order  Citations  PageRank
Corinna Cortes  1      6574       1120.50
Giulia DeSalvo  2      73         6.45
Mehryar Mohri   3      4502       448.21
Yang, Scott     4      33         6.24