Title: Regression Oracles and Exploration Strategies for Short-Horizon Multi-Armed Bandits
Abstract: This paper explores multi-armed bandit (MAB) strategies in very-short-horizon scenarios, i.e., when the bandit strategy is allowed only very few interactions with the environment. This is an understudied setting in the MAB literature with many applications in the context of games, such as player modeling. Specifically, we pursue three different ideas. First, we explore the use of regression oracles, which replace the simple average used in strategies such as ε-greedy with linear regression models. Second, we examine different exploration patterns, such as forced exploration phases. Finally, we introduce a new variant of the UCB1 strategy called UCBT that has interesting properties and no tunable parameters. We present experimental results in a domain motivated by exergames, where the goal is to maximize a player's daily steps. Our results show that the combination of ε-greedy or ε-decreasing with regression oracles outperforms all other tested strategies in the short-horizon setting.
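The two main ingredients of the abstract can be illustrated with a short sketch. The Python below is a minimal, hypothetical illustration and not the authors' implementation: EpsilonGreedyRegression replaces the per-arm sample mean of ε-greedy with a per-arm least-squares fit of reward against pull time (the "regression oracle" idea; the abstract does not specify which regressors the paper actually uses), and UCB1 is the standard baseline that UCBT modifies (UCBT's exact parameter-free bound is defined in the paper, not in the abstract).

```python
import math
import random


class EpsilonGreedyRegression:
    """Epsilon-greedy whose per-arm value estimate comes from a
    least-squares fit of reward against pull time (a "regression
    oracle"), rather than a running sample mean."""

    def __init__(self, n_arms, epsilon=0.1):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.history = [[] for _ in range(n_arms)]  # per-arm (t, reward) pairs

    def _predict(self, arm, t):
        pts = self.history[arm]
        if len(pts) < 2:  # too little data to fit a line: fall back to the mean
            return sum(r for _, r in pts) / len(pts) if pts else 0.0
        n = len(pts)
        mx = sum(x for x, _ in pts) / n
        my = sum(y for _, y in pts) / n
        sxx = sum((x - mx) ** 2 for x, _ in pts)
        sxy = sum((x - mx) * (y - my) for x, y in pts)
        slope = sxy / sxx if sxx else 0.0
        return my + slope * (t - mx)  # extrapolate the trend to the current step

    def select(self, t):
        if random.random() < self.epsilon:  # explore uniformly at random
            return random.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self._predict(a, t))

    def update(self, arm, t, reward):
        self.history[arm].append((t, reward))


class UCB1:
    """Standard UCB1: play the arm maximizing mean + sqrt(2 ln t / n).
    UCBT, the paper's parameter-free variant, modifies this bound; its
    exact form is given in the paper itself, not in the abstract."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.means = [0.0] * n_arms

    def select(self, t):
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm  # play every arm once before using the bound
        return max(
            range(len(self.counts)),
            key=lambda a: self.means[a]
            + math.sqrt(2 * math.log(t) / self.counts[a]),
        )

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]


if __name__ == "__main__":
    # Usage sketch: a 20-pull "short horizon" over 3 arms, with a
    # hypothetical noisy reward standing in for daily step counts.
    agent = EpsilonGreedyRegression(n_arms=3, epsilon=0.1)
    for t in range(1, 21):
        arm = agent.select(t)
        reward = random.gauss(arm, 1.0)  # stand-in reward signal
        agent.update(arm, t, reward)
```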
Year: 2020
DOI: 10.1109/CoG47356.2020.9231529
Venue: 2020 IEEE Conference on Games (CoG)
Keywords: multi-armed bandit, player modeling, machine learning, linear regression, reinforcement learning
DocType: Conference
ISSN: 2325-4270
ISBN: 978-1-7281-4534-1
Citations: 0
PageRank: 0.34
References: 3
Authors: 3
Name                Order  Citations  PageRank
Robert C. Gray      1      0          0.34
Jichen Zhu          2      111        29.76
Santiago Ontañón    3      619        78.32