Abstract
---
In this paper we analyze the consistency of loss functions for learning from weakly labelled data, and its relation to properness. We show that the consistency of a given loss depends on the mixing matrix, which is the transition matrix relating the weak labels and the true class. A linear transformation can be used to convert a conventional classification-calibrated (CC) loss into a weak CC loss. By comparing the maximal dimension of the set of mixing matrices that are admissible for a given CC loss with that for proper losses, we show that classification calibration is a much less restrictive condition than properness. Moreover, we show that while the transformation of conventional proper losses into weak proper losses does not preserve convexity in general, conventional convex CC losses can be easily transformed into weak and convex CC losses. Our analysis provides a general procedure to construct convex CC losses, and to identify the set of mixing matrices admissible for a given transformation. Several examples are provided to illustrate our approach.
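The linear transformation mentioned in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's code: it assumes a mixing matrix `M` with `M[z, y] = P(weak label z | true class y)`, and picks weak-loss values `l_tilde` solving `M.T @ l_tilde = l`, so that the expected weak loss under each true class recovers the original loss.

```python
import numpy as np

def weak_loss(M, l):
    """One valid choice of weak loss: solve M^T l_tilde = l via the
    pseudoinverse. Any solution yields an unbiased weak loss."""
    return np.linalg.pinv(M.T) @ l

# Example with 3 weak labels and 2 true classes (entries are assumptions).
# Each column of M is a conditional distribution P(z | y), so columns sum to 1.
M = np.array([[0.7, 0.1],
              [0.2, 0.2],
              [0.1, 0.7]])

l = np.array([1.0, 0.0])      # loss of some fixed prediction, per true class y
l_tilde = weak_loss(M, l)     # loss value to charge for each weak label z

# Unbiasedness check: the expectation of l_tilde over weak labels,
# conditioned on each true class, equals the original loss vector.
recovered = M.T @ l_tilde
print(np.allclose(recovered, l))
```

Because `M.T` has full row rank here, `M.T @ pinv(M.T)` is the identity and the check prints `True`; which mixing matrices admit such a correction while preserving calibration (and convexity) is exactly what the paper characterizes.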
Year | DOI | Venue |
---|---|---|
2014 | 10.1007/978-3-662-44848-9_13 | ECML/PKDD (1) |
Field | DocType | Citations
---|---|---
Discrete mathematics, Applied mathematics, Convexity, Stochastic matrix, Matrix (mathematics), Regular polygon, Linear map, Mathematics, Calibration | Conference | 2
PageRank | References | Authors
---|---|---
0.40 | 14 | 3
Name | Order | Citations | PageRank |
---|---|---|---|
Jesús Cid-Sueiro | 1 | 103 | 16.94 |
Darío García-García | 2 | 24 | 4.41 |
Raúl Santos-Rodríguez | 3 | 36 | 12.41 |