Title
Online Learning With Diverse User Preferences
Abstract
In this paper, we investigate the impact of diverse user preferences on learning under the stochastic multi-armed bandit (MAB) framework. We aim to show that when the user preferences are sufficiently diverse and each arm is optimal for certain users, the O(log T) regret incurred by exploring suboptimal arms under the standard stochastic MAB setting can be reduced to a constant. Our intuition is that to achieve sub-linear regret, the number of times an optimal arm is pulled should scale linearly in time; when every arm is optimal for certain users and is therefore pulled frequently, the estimated arm statistics quickly converge to their true values, dramatically reducing the need for exploration. We cast the problem into a stochastic linear bandit model, where both user preferences and arm states are modeled as independent and identically distributed (i.i.d.) d-dimensional random vectors. After receiving a user preference vector at the beginning of each time slot, the learner pulls an arm and receives a reward equal to the inner product of the preference vector and the arm state vector. We also assume that the state of the pulled arm is revealed to the learner once it is pulled. We propose a Weighted Upper Confidence Bound (W-UCB) algorithm and show that it achieves a constant regret when the user preferences are sufficiently diverse. The performance of W-UCB under general setups is also fully characterized and validated with synthetic data.
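For illustration, below is a minimal sketch of the interaction model described in the abstract: in each round an i.i.d. d-dimensional preference vector arrives, the learner pulls one of K arms, the reward is the inner product of the preference vector and the pulled arm's i.i.d. state vector, and that state vector is then revealed. The preference-weighted UCB index used here (empirical mean state projected onto the preference, plus a generic log-t exploration bonus) is an assumed stand-in, not the paper's exact W-UCB rule; the dimensions, arm distributions, and horizon are likewise made up for the example.

```python
import numpy as np

# Sketch of the setting in the abstract; the index rule and all constants below
# are assumptions for illustration, not the authors' W-UCB specification.
rng = np.random.default_rng(0)
d, K, T = 3, 4, 5000              # dimension, number of arms, horizon (assumed)
arm_means = rng.uniform(0.2, 0.8, size=(K, d))   # unknown mean state vector per arm

state_sum = np.zeros((K, d))      # running sum of observed state vectors
pull_count = np.zeros(K)          # number of pulls per arm

total_reward = 0.0
for t in range(1, T + 1):
    theta = rng.dirichlet(np.ones(d))             # i.i.d. user preference vector

    # Preference-weighted UCB index: estimated reward under theta plus a bonus.
    est_mean = state_sum / np.maximum(pull_count, 1.0)[:, None]
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(pull_count, 1.0))
    bonus[pull_count == 0] = np.inf               # pull every arm at least once
    a = int(np.argmax(est_mean @ theta + bonus))

    # Environment: the pulled arm's i.i.d. state vector is drawn and revealed.
    x = rng.uniform(0.0, 2.0, size=d) * arm_means[a]
    total_reward += theta @ x                     # reward = inner product
    state_sum[a] += x
    pull_count[a] += 1

print("average reward over T rounds:", total_reward / T)
```

When each sampled preference vector tends to favor a different arm, every arm keeps getting pulled under its own users, so the empirical mean states converge without dedicated exploration; this is the mechanism the abstract credits for the constant-regret result.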
Year
2019
DOI
10.1109/ISIT.2019.8849646
Venue
2019 IEEE International Symposium on Information Theory (ISIT)
Field
DocType
Volume
Online learning, Mathematical optimization, State vector, Regret, Intuition, Synthetic data, Mathematics
Journal
abs/1901.07924
Citations
0
PageRank
0.34
References
0
Authors
4
Name          Order   Citations   PageRank
Chao Gan      1       0           1.35
Jing Yang     2       1941        120.91
Ruida Zhou    3       3           3.44
Cong Shen     4       13          2.95