Title
Effects of data grouping on calibration measures of classifiers
Abstract
The calibration of a probabilistic classifier refers to the extent to which its probability estimates match the true class membership probabilities. Measuring the calibration of a classifier usually relies on performing chi-squared goodness-of-fit tests between grouped probabilities and the observations in these groups. We considered alternatives to the Hosmer-Lemeshow test, the standard chi-squared test with groups based on sorted model outputs. Since this grouping does not represent "natural" groupings in data space, we investigated a chi-squared test with grouping strategies in data space. Using a series of artificial data sets for which the correct models are known, as well as one real-world data set, we analyzed the performance of the Pigeon-Heyse test with groupings by self-organizing maps, by k-means clustering, and by random assignment of points to groups. We observed that the Pigeon-Heyse test offers slightly better performance than the Hosmer-Lemeshow test while also being able to locate regions of poor calibration in data space.
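To illustrate the tests discussed in the abstract, here is a minimal Python sketch (not from the paper; it assumes NumPy, SciPy, and scikit-learn, and all function names are my own). It computes the usual chi-squared calibration statistic over arbitrary group assignments, applies it with the classic Hosmer-Lemeshow grouping by sorted model outputs, and then with k-means groups in data space to give the flavor of the grouping strategies the paper investigates; the Pigeon-Heyse correction factor itself is omitted.

import numpy as np
from scipy.stats import chi2
from sklearn.cluster import KMeans

def hl_statistic(y_true, y_prob, group_ids):
    # Chi-squared calibration statistic over arbitrary group assignments:
    # sum over groups of (observed - expected)^2 / (n * p_bar * (1 - p_bar)).
    stat = 0.0
    for g in np.unique(group_ids):
        idx = group_ids == g
        n = idx.sum()
        observed = y_true[idx].sum()   # events observed in this group
        expected = y_prob[idx].sum()   # events the model expects here
        p_bar = expected / n           # mean predicted probability in group
        stat += (observed - expected) ** 2 / (n * p_bar * (1.0 - p_bar))
    return stat

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    # Classic HL grouping: sort predicted probabilities into equal-sized bins.
    ranks = np.argsort(np.argsort(y_prob))
    group_ids = ranks * n_groups // len(y_prob)
    stat = hl_statistic(y_true, y_prob, group_ids)
    return stat, chi2.sf(stat, n_groups - 2)   # conventional g - 2 df

def data_space_test(X, y_true, y_prob, n_groups=10, seed=0):
    # Grouping in data space via k-means, one of the strategies the paper
    # investigates; without the Pigeon-Heyse correction factor this only
    # illustrates the grouping idea, not the exact test from the paper.
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed)
    group_ids = km.fit_predict(X)
    stat = hl_statistic(y_true, y_prob, group_ids)
    return stat, chi2.sf(stat, n_groups - 2)

# Example on synthetic data where the model is correct by construction:
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
p = 1.0 / (1.0 + np.exp(-X[:, 0]))            # true logistic probabilities
y = (rng.uniform(size=1000) < p).astype(int)
print(hosmer_lemeshow(y, p))                   # expect a large p-value
print(data_space_test(X, y, p))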
Year
2011
DOI
10.1007/978-3-642-27549-4_46
Venue
EUROCAST (1)
Keywords
real-world data, poor calibration, chi-squared goodness-of-fit test, chi-squared test, artificial data set, data space, Hosmer-Lemeshow test, Pigeon-Heyse test, calibration measure, standard chi-squared test, better performance
Field
Data set, Pattern recognition, Computer science, Hosmer–Lemeshow test, Artificial intelligence, Classifier (linguistics), Probabilistic classification, Cluster analysis, Goodness of fit, Calibration (statistics), Calibration
DocType
Conference
Volume
6927
ISSN
0302-9743
Citations
1
PageRank
0.48
References
4
Authors
2
Name               Order  Citations  PageRank
Stephan Dreiseitl  1      338        34.80
Melanie Osl        2      71         6.83