Title: Quantifying Learning Guarantees for Convex but Inconsistent Surrogates
Abstract: We study consistency properties of machine learning methods that are based on minimizing convex surrogates. We extend the recent framework of Osokin et al. [14] for the quantitative analysis of consistency properties to the case of inconsistent surrogates. Our key technical contribution is a new lower bound on the calibration function for the quadratic surrogate, which is non-trivial (not always zero) in inconsistent cases. The new bound allows us to quantify the level of inconsistency of a setting and shows that learning with inconsistent surrogates can still come with guarantees on sample complexity and optimization difficulty. We apply the theory to two concrete cases: multi-class classification with the tree-structured loss and ranking with the mean average precision loss. The results show the approximation-computation trade-offs caused by inconsistent surrogates and their potential benefits.
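For context, a minimal sketch of the objects the abstract refers to, following the calibration framework of Osokin et al. [14]; the notation below (surrogate excess risk \delta\Phi, target excess risk \delta\ell, score family \mathcal{F}, loss matrix L over k labels) is an assumption carried over from that framework, not stated in this record:

\[
H_{\Phi,L,\mathcal{F}}(\varepsilon)
  \;=\; \inf_{f \in \mathcal{F}} \;\delta\Phi(f)
  \quad\text{subject to}\quad \delta\ell(f) \ge \varepsilon,
\qquad
\Phi_{\mathrm{quad}}(f, y) \;=\; \frac{1}{2k}\,\bigl\|f + L(:, y)\bigr\|_2^2 .
\]

Read this way: driving the surrogate excess risk below H(\varepsilon) forces the target excess risk below \varepsilon. A surrogate is consistent when H(\varepsilon) > 0 for every \varepsilon > 0; for an inconsistent surrogate H vanishes on some interval [0, \varepsilon_0], and the paper's contribution is a lower bound on H for \Phi_{\mathrm{quad}} that remains non-trivial above that inconsistency level (the symbol \varepsilon_0 here is illustrative).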
Year: 2018
Venue: Advances in Neural Information Processing Systems 31 (NIPS 2018)
Keywords: sample complexity, quantitative analysis, mean average precision, parallel optimization, hierarchical classification
DocType:
Volume: 31
ISSN: 1049-5258
Conference:
Citations: 0
PageRank: 0.34
References: 16
Authors: 3
Name                  Order  Citations  PageRank
Struminsky, Kirill    1      0          0.34
Simon Lacoste-Julien  2      1138       62.72
A. Osokin             3      430        19.01