Title
Testing the calibration of classification models from first principles.
Abstract
The accurate assessment of the calibration of classification models is severely limited by the fact that there is no easily available gold standard against which to compare a model's outputs. The usual procedures group expected and observed probabilities, and then perform a χ² goodness-of-fit test. We propose an entirely new approach to calibration testing that can be derived directly from the first principles of statistical hypothesis testing. The null hypothesis is that the model outputs are correct, i.e., that they are good estimates of the true unknown class membership probabilities. Our test calculates a p-value by checking how (im)probable the observed class labels are under the null hypothesis. We demonstrate experimentally that our proposed test performs comparably to, and sometimes even better than, the Hosmer-Lemeshow goodness-of-fit test, the de facto standard in calibration assessment.
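The abstract only names the ingredients of the test: a null hypothesis under which the predicted probabilities are the true class-membership probabilities, and a p-value measuring how improbable the observed labels are under that hypothesis. As a rough illustration of this idea (not the authors' actual procedure), here is a minimal Monte Carlo sketch in Python; the function name `calibration_pvalue`, the negative log-likelihood test statistic, and the simulation-based p-value computation are all assumptions made for this sketch.

```python
import numpy as np


def calibration_pvalue(y, p, n_sim=10_000, seed=None):
    """Monte Carlo p-value for H0: the model outputs p are the true
    class-membership probabilities of the binary labels y.

    Under H0 each label is an independent Bernoulli(p_i) draw, so we
    simulate label vectors from p, recompute a test statistic (here the
    negative log-likelihood of the labels under p), and report how often
    the simulated statistic is at least as extreme as the observed one.
    This is an illustrative stand-in for the paper's test, not its
    actual derivation.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1.0 - 1e-12)

    def nll(labels):
        # Negative log-likelihood of binary labels under probabilities p;
        # labels may be an (n_sim, n) matrix, reduced row-wise.
        return -(labels * np.log(p)
                 + (1.0 - labels) * np.log(1.0 - p)).sum(axis=-1)

    observed = nll(y)
    simulated = nll((rng.random((n_sim, p.size)) < p).astype(float))
    # One-sided Monte Carlo p-value with the usual +1 correction: small
    # values mean the observed labels are improbable if p were correct.
    return (1.0 + np.sum(simulated >= observed)) / (1.0 + n_sim)


# Hypothetical usage: calibrated vs. clearly miscalibrated outputs.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=200)           # model outputs
y_good = (rng.random(200) < p).astype(float)    # labels drawn from p
y_bad = (rng.random(200) < 0.5).astype(float)   # labels independent of p
print(calibration_pvalue(y_good, p, seed=1))    # no evidence against H0
print(calibration_pvalue(y_bad, p, seed=1))     # typically very small
```

Note the contrast with the Hosmer-Lemeshow baseline mentioned in the abstract, which groups the predicted probabilities (typically into deciles) before applying a χ² test; the sketch above works directly on the ungrouped probabilities, in the spirit of the abstract's "first principles" framing. The published test may well compute its p-value analytically rather than by simulation.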
Year
2012
Venue
AMIA
Keywords
calibration, area under curve
Field
Data mining, De facto standard, Test statistic, Computer science, Null hypothesis, Statistics, Calibration (statistics), Calibration, Statistical hypothesis testing
DocType
Conference
Volume
2012
ISSN
1942-597X
Citations
0
PageRank
0.34
References
0
Authors
2
Name                Order    Citations    PageRank
Stephan Dreiseitl   1        338          34.80
Melanie Osl         2        71           6.83