Title
Regret Minimization With Concept Drift
Abstract
In standard online learning, the goal of the learner is to maintain an average loss close to the loss of the best-performing function in a fixed class. Classic results show that simple algorithms can achieve an average loss arbitrarily close to that of the best function in retrospect, even when input-output pairs are chosen by an adversary. However, in many real-world applications, such as spam prediction and classification of news articles, the best target function may be drifting over time. We introduce a novel model of concept drift in which an adversary is given control of both the distribution over inputs at each time step and the corresponding labels. The goal of the learner is to maintain an average loss close to the 0/1 loss of the best slowly changing sequence of functions with no more than K large shifts. We provide regret bounds for learning in this model using an (inefficient) reduction to the standard no-regret setting. We then go on to provide and analyze an efficient algorithm for learning d-dimensional hyperplanes with drift. We conclude with simulations illustrating the circumstances under which this algorithm outperforms other commonly studied algorithms when the target hyperplane is drifting.
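To make the drifting-hyperplane setting concrete, the following minimal sketch (not the paper's algorithm; all constants such as d, T, and the drift rate are illustrative assumptions) simulates online classification where the target hyperplane's normal vector is perturbed slightly each round, with a standard Perceptron as the learner:

import numpy as np

rng = np.random.default_rng(0)
d, T, drift = 20, 5000, 0.01  # dimension, rounds, per-step drift magnitude (assumed values)

u = rng.standard_normal(d)
u /= np.linalg.norm(u)        # drifting target hyperplane (unit normal)
w = np.zeros(d)               # Perceptron weight vector
mistakes = 0

for t in range(T):
    # Nature picks the input; random here purely for illustration.
    x = rng.standard_normal(d)
    y = 1.0 if u @ x >= 0 else -1.0      # label from the current target hyperplane
    y_hat = 1.0 if w @ x >= 0 else -1.0  # learner's prediction
    if y_hat != y:                       # standard Perceptron update on a mistake
        w += y * x
        mistakes += 1
    # Slow drift: perturb the target and renormalize so it stays a unit vector.
    u += drift * rng.standard_normal(d)
    u /= np.linalg.norm(u)

print(f"mistake rate over {T} rounds: {mistakes / T:.3f}")

Raising the drift rate degrades the plain Perceptron's mistake rate, which is the regime the paper's drift-aware algorithm is designed to handle.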
Year
2010
Venue
COLT
Keywords
concept drift
Field
Online learning, Mathematical optimization, Regret, Regret minimization, Computer science, Input/output, Concept drift, Artificial intelligence, SIMPLE algorithm, Hyperplane, Adversary, Machine learning
DocType
Conference
Citations
14
PageRank
0.68
References
13
Authors
4
Name                      Order  Citations  PageRank
Koby Crammer              1      5252       466.86
Yishay Mansour            2      6211       745.95
Eyal Even-dar             3      948        73.71
Jennifer Wortman Vaughan  4      929        42.23