Title
Consistency of structured output learning with missing labels.
Abstract
In this paper we study the statistical consistency of partial losses suitable for learning structured output predictors from examples with missing labels. We provide sufficient conditions on the data-generating distribution under which the expected risk of the structured predictor learned by minimizing the partial loss converges to the optimal Bayes risk defined by an associated complete loss. We introduce the concept of surrogate classification-calibrated partial losses, which are easier to optimize yet whose minimization preserves statistical consistency. We give concrete examples of surrogate partial losses that are classification calibrated. In particular, we show that the ramp loss, which lies at the core of many existing algorithms, is classification calibrated.
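The ramp loss mentioned in the abstract can be illustrated in the simplest binary-margin setting (a sketch only; the paper's structured-output formulation is more general). The ramp loss is the hinge loss clipped at 1, which makes it bounded and robust to badly misclassified examples:

```python
def hinge_loss(margin):
    """Standard hinge loss: convex but unbounded as the margin decreases."""
    return max(0.0, 1.0 - margin)

def ramp_loss(margin):
    """Ramp loss: the hinge loss clipped at 1, hence bounded in [0, 1]."""
    return min(1.0, hinge_loss(margin))
```

For a correctly classified example with margin 2 both losses are 0; for a grossly misclassified example with margin -5 the hinge loss grows to 6 while the ramp loss saturates at 1.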
Year
2015
Venue
ACML
Field
Data mining, Computer science, Minification, Artificial intelligence, Partial loss, Machine learning, Bayes' theorem
DocType
Conference
Citations
0
PageRank
0.34
References
10
Authors
3
Name                 Order  Citations  PageRank
Kostiantyn Antoniuk  1      5          1.46
Vojtěch Franc        2      584        55.78
Václav Hlaváč        3      616        85.46