Title: Multi-Armed Bandits on Unit Interval Graphs
Abstract: An online learning problem with side information on the similarity and dissimilarity across different actions is considered. The problem is formulated as a stochastic multi-armed bandit problem with a graph-structured learning space: each node in the graph represents an arm, and an edge between two nodes represents closeness in their mean rewards. It is shown that the resulting graph is a unit interval graph. A hierarchical learning policy is developed that offers sublinear scaling of regret with the size of the learning space by fully exploiting the side information through an offline reduction of the learning space and an online aggregation of reward observations from similar arms. The order optimality of the proposed policy in terms of both the size of the learning space and the length of the time horizon is established through a matching lower bound on regret. It is further shown that when the mean rewards are bounded, complete learning with bounded regret over an infinite time horizon can be achieved. An extension to the case with only partial information on arm similarity and dissimilarity is also discussed.
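The online-aggregation idea described in the abstract, pooling reward observations from arms known to be similar when forming confidence indices, can be illustrated with a minimal sketch. This is not the paper's hierarchical policy; it is a hypothetical simplification in which a standard UCB index is computed over each arm's graph neighborhood (the `neighbors` adjacency list, the Gaussian noise, and all parameter choices are assumptions for illustration).

```python
import math
import random

def aggregated_ucb(means, neighbors, horizon, seed=0):
    """Illustrative UCB variant that aggregates observations from
    graph-adjacent (i.e. similar) arms when computing each arm's index.
    `means` are the true mean rewards (used only to simulate pulls);
    `neighbors[i]` lists the arms adjacent to arm i in the similarity graph.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # pulls of each individual arm
    sums = [0.0] * k      # cumulative reward of each individual arm
    total_reward = 0.0

    for t in range(1, horizon + 1):
        if t <= k:
            # Initialization: pull every arm once.
            arm = t - 1
        else:
            def index(i):
                # Pool samples from arm i and its neighbors, since their
                # means are known to be close.
                group = [i] + neighbors[i]
                n = sum(counts[j] for j in group)
                mean = sum(sums[j] for j in group) / n
                return mean + math.sqrt(2.0 * math.log(t) / n)
            arm = max(range(k), key=index)

        reward = means[arm] + rng.gauss(0.0, 0.1)  # simulated noisy pull
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward

    return total_reward, counts
```

Because similar arms share samples, the confidence radius of a neighborhood shrinks faster than that of any single arm, which is the intuition behind the sublinear scaling of regret with the size of the learning space claimed in the abstract.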
Year: 2018
Venue: arXiv: Learning
Field: Sublinear function, Mathematical optimization, Time horizon, Regret, Closeness, Upper and lower bounds, Online aggregation, Scaling, Mathematics, Bounded function
DocType: Journal
Volume: abs/1802.04339
Citations: 0
PageRank: 0.34
References: 20
Authors: 4
Name            Order  Citations/PageRank
Xiao Xu         1      13.39
Sattar Vakili   2      428.83
Qing Zhao       3      192.50
Swami, A.       4      5105566.62