Title
A Nearly Optimal Contextual Bandit Algorithm.
Abstract
We investigate the contextual multi-armed bandit problem in an adversarial setting and introduce an online algorithm that asymptotically achieves the performance of the best contextual bandit arm selection strategy under certain conditions. We show that our algorithm is highly efficient and provides significantly improved performance with a guaranteed performance upper bound in a strong mathematical sense. We make no statistical assumptions on the context vectors or the losses of the bandit arms, hence our results hold even in adversarial environments. We use a tree structure to partition the space of context vectors in a nested manner. Using this tree, we construct a large class of context-dependent bandit arm selection strategies and adaptively combine them to achieve the performance of the best strategy in this class. We exploit the hierarchical nature of the introduced tree to implement this combination with significantly reduced computational complexity, so our algorithm can be efficiently used in applications involving big data. Through an extensive set of experiments on synthetic and real data, we demonstrate significant performance gains of the proposed algorithm over state-of-the-art adversarial bandit algorithms.
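A minimal illustrative sketch of the high-level idea described in the abstract is given below: a dyadic tree partition of a one-dimensional context space, with an independent EXP3-style adversarial bandit learner assigned to each leaf region. The class and parameter names (`TreePartitionBandit`, `Exp3`, `depth`, `gamma`) are assumptions for illustration only; the sketch uses a single fixed partition and does not implement the paper's adaptive combination of all partitions induced by the tree.

```python
import numpy as np

class Exp3:
    """EXP3 adversarial bandit learner for one region of the context space."""
    def __init__(self, n_arms, gamma=0.1, rng=None):
        self.n_arms = n_arms
        self.gamma = gamma
        self.weights = np.ones(n_arms)
        self.rng = rng or np.random.default_rng()

    def _probs(self):
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / self.n_arms

    def select(self):
        p = self._probs()
        arm = int(self.rng.choice(self.n_arms, p=p))
        return arm, p[arm]

    def update(self, arm, loss, prob):
        # Importance-weighted estimate of the reward of the played arm only.
        reward = 1.0 - loss
        estimate = reward / prob
        self.weights[arm] *= np.exp(self.gamma * estimate / self.n_arms)

class TreePartitionBandit:
    """Fixed-depth dyadic partition of scalar contexts in [0, 1); one EXP3 learner per leaf."""
    def __init__(self, n_arms, depth=3, gamma=0.1):
        self.depth = depth
        self.leaves = [Exp3(n_arms, gamma) for _ in range(2 ** depth)]

    def _leaf(self, context):
        # Map a context in [0, 1) to the index of its leaf region.
        return min(int(context * (2 ** self.depth)), 2 ** self.depth - 1)

    def select(self, context):
        idx = self._leaf(context)
        arm, prob = self.leaves[idx].select()
        return idx, arm, prob

    def update(self, leaf_idx, arm, loss, prob):
        self.leaves[leaf_idx].update(arm, loss, prob)

# Toy usage: arm 0 incurs low loss for contexts below 0.5, arm 1 otherwise.
rng = np.random.default_rng(0)
bandit = TreePartitionBandit(n_arms=2, depth=2)
for t in range(1000):
    x = rng.random()
    leaf, arm, prob = bandit.select(x)
    best_arm = 0 if x < 0.5 else 1
    loss = 0.1 if arm == best_arm else 0.9
    bandit.update(leaf, arm, loss, prob)
```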
Year
2016
Venue
arXiv: Learning
Field
Online algorithm, Mathematical optimization, Upper and lower bounds, Algorithm, Artificial intelligence, Partition (number theory), Statistical assumption, Big data, Mathematics, Machine learning, Adversarial system, Computational complexity theory
DocType
Journal
Volume
abs/1612.01367
Citations
0
PageRank
0.34
References
0
Authors
4
Name | Order | Citations | PageRank
Mohammadreza Mohaghegh Neyshabouri | 1 | 0 | 0.34
Kaan Gokcesu | 2 | 8 | 5.26
Selami Ciftci | 3 | 26 | 4.58
Suleyman Serdar Kozat | 4 | 121 | 31.32