Title: Learning Bounds for Importance Weighting
Abstract: This paper presents an analysis of importance weighting for learning from finite samples and gives a series of theoretical and algorithmic results. We point out simple cases where importance weighting can fail, which suggests the need for an analysis of the properties of this technique. We then give both upper and lower bounds for generalization with bounded importance weights and, more significantly, give learning guarantees for the more common case of unbounded importance weights under the weak assumption that the second moment is bounded, a condition related to the Rényi divergence of the training and test distributions. These results are based on a series of novel and general bounds we derive for unbounded loss functions, which are of independent interest. We use these bounds to guide the definition of an alternative reweighting algorithm and report the results of experiments demonstrating its benefits. Finally, we analyze the properties of normalized importance weights, which are also commonly used.
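The abstract describes importance weighting, where training examples are reweighted by the density ratio w(x) = p_test(x)/p_train(x) so that an average over training samples estimates a test-distribution quantity. The following is a minimal illustrative sketch of that basic idea (not the paper's algorithm or its bounds), using two Gaussian distributions as an assumed example; all function names here are hypothetical.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_weighted_mean(samples, f, p_test, p_train):
    """Estimate E_test[f(X)] from samples drawn under p_train,
    reweighting each sample by w(x) = p_test(x) / p_train(x)."""
    total = 0.0
    for x in samples:
        w = p_test(x) / p_train(x)  # importance weight, may be unbounded in general
        total += w * f(x)
    return total / len(samples)

# Assumed toy setup: train on N(0, 1), test on N(0.5, 1).
random.seed(0)
p_train = lambda x: normal_pdf(x, 0.0, 1.0)
p_test = lambda x: normal_pdf(x, 0.5, 1.0)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]

# The true test mean is 0.5; the weighted estimate from training samples
# should be close to it, even though the unweighted sample mean is near 0.
est = importance_weighted_mean(samples, lambda x: x, p_test, p_train)
```

When the two distributions differ more sharply, the weights w(x) can grow without bound and the estimator's variance blows up, which is the regime the paper's second-moment (Rényi divergence) condition addresses.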
Year: 2010
Venue: NIPS
Fields: Mathematical optimization, Weighting, Divergence, Normalization (statistics), Upper and lower bounds, Mathematics, Bounded function, Second moment of area
DocType: Conference
Citations: 51
PageRank: 1.94
References: 16
Authors: 3
Name              Order   Citations   PageRank
Corinna Cortes    1       6574        1120.50
Yishay Mansour    2       6211        745.95
Mehryar Mohri     3       4502        448.21