Title
Ballooning Multi-Armed Bandits
Abstract
We introduce ballooning multi-armed bandits (BL-MAB), a novel extension of the classical stochastic MAB model. In the BL-MAB model, the set of available arms grows (or balloons) over time, and regret is computed with respect to the best arm available at each time step. We first observe that existing stochastic MAB algorithms are not regret-optimal for the BL-MAB model. We show that if the best arm is equally likely to arrive at any time, sub-linear regret cannot be achieved, irrespective of the arrival of the other arms. We further show that if the best arm is more likely to arrive in the early rounds, sub-linear regret is achievable. Under reasonable assumptions on the arrival distribution of the best arm, namely on the thinness of the distribution's tail, we prove that the proposed algorithm achieves sub-linear instance-independent regret. We further quantify the explicit dependence of the regret on the parameters of the arrival distribution.
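The BL-MAB setting described above can be illustrated with a small simulation. The sketch below is a hypothetical toy model, not the paper's algorithm: one new Bernoulli arm arrives per round, a UCB1-style learner admits only arms arriving before an assumed `cutoff` (in the spirit of restricting exploration to early-arriving arms), and regret is measured against the best arm available so far.

```python
import math
import random

def simulate_bl_mab(horizon=2000, cutoff=200, seed=0):
    """Toy BL-MAB simulation (illustrative sketch, not the paper's algorithm).

    One new Bernoulli arm arrives every round; the learner only admits arms
    that arrive within the first `cutoff` rounds and runs UCB1 over them.
    Returns the average per-round regret against the best available arm.
    """
    rng = random.Random(seed)
    all_means = []                       # means of every arm arrived so far
    means, pulls, totals = [], [], []    # statistics of admitted arms only
    regret = 0.0
    for t in range(1, horizon + 1):
        mu = rng.random()
        all_means.append(mu)             # the arm set balloons every round
        if t <= cutoff:                  # admit only early-arriving arms
            means.append(mu)
            pulls.append(0)
            totals.append(0.0)
        untried = [i for i, n in enumerate(pulls) if n == 0]
        if untried:
            i = untried[0]               # initialize each admitted arm once
        else:                            # otherwise play the highest UCB1 index
            i = max(range(len(means)),
                    key=lambda j: totals[j] / pulls[j]
                    + math.sqrt(2 * math.log(t) / pulls[j]))
        reward = 1.0 if rng.random() < means[i] else 0.0
        pulls[i] += 1
        totals[i] += reward
        # Regret is computed against the best arm *available* at time t.
        regret += max(all_means) - means[i]
    return regret / horizon

print(simulate_bl_mab())
```

Note that without the cutoff (i.e., `cutoff=horizon`), every round brings a never-pulled arm, so a standard index policy is forced into perpetual exploration, which is one intuition for why off-the-shelf MAB algorithms fail to be regret-optimal here.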
Year
2020
DOI
10.5555/3398761.3399003
Venue
AAMAS '19: International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, May 2020
DocType
Conference
ISBN
978-1-4503-7518-4
Citations
0
PageRank
0.34
References
0
Authors
5
Name            Order  Citations  PageRank
Ganesh Ghalme   1      3          2.42
Swapnil Dhamal  2      24         8.00
Shweta Jain     3      2          1.37
Sujit Gujar     4      76         25.33
Y. Narahari     5      699        98.97