Title
Empirical Error-Confidence Curves For Neural Network And Gaussian Classifiers
Abstract
"Error-Confidence" measures the probability that the proportion of errors made by a classifier will be within epsilon of E(B), the optimal (Bayes) error. Probably Almost Bayes (PAB) theory attempts to quantify how this confidence increases with the number of training samples. We investigate the relationship empirically by comparing average error versus the number of training patterns (m) for linear and neural network classifiers. On Gaussian problems, the resulting error-confidence (EC) curves demonstrate that the PAB bounds are extremely conservative. Asymptotic statistics predicts a linear relationship between the logarithms of the average error and the number of training patterns. For low Bayes error rates we found excellent agreement between this prediction and the linear discriminant performance. At higher Bayes error rates we still found a linear relationship, but with a shallower slope than the predicted value of -1. When the underlying true model is a three-layer network, the EC curves show a greater dependence on classifier capacity, and the linear predictions no longer seem to hold.
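The log-log relationship described in the abstract can be illustrated with a small simulation. The sketch below (not from the paper; the Gaussian problem, dimensions, and sample sizes are assumptions chosen for illustration) trains a plug-in nearest-mean linear classifier on m samples per class for a two-class spherical Gaussian problem, averages its test error over many trials, and fits the slope of log(average error minus Bayes error) against log m; in the low-error regime that slope should sit near the predicted -1.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): two spherical Gaussian classes
# in 5 dimensions whose means are separated by Mahalanobis distance d,
# so the Bayes error is Phi(-d/2).
dim, d = 5, 2.0
mu0 = np.zeros(dim)
mu1 = np.zeros(dim)
mu1[0] = d

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

bayes_error = norm_cdf(-d / 2.0)   # about 0.159 for d = 2

def avg_error(m, trials=200, n_test=2000):
    """Average test error of a plug-in nearest-mean (linear) classifier
    trained on m samples per class, averaged over independent trials."""
    errs = []
    for _ in range(trials):
        # Estimate class means from m training samples each.
        m0 = (rng.normal(size=(m, dim)) + mu0).mean(axis=0)
        m1 = (rng.normal(size=(m, dim)) + mu1).mean(axis=0)
        # Evaluate on a fresh balanced test set.
        x = np.vstack([rng.normal(size=(n_test, dim)) + mu0,
                       rng.normal(size=(n_test, dim)) + mu1])
        y = np.r_[np.zeros(n_test), np.ones(n_test)]
        pred = (np.linalg.norm(x - m1, axis=1)
                < np.linalg.norm(x - m0, axis=1)).astype(float)
        errs.append(float(np.mean(pred != y)))
    return float(np.mean(errs))

# Excess error E(m) - E(B) versus m on log-log axes: asymptotic theory
# predicts a straight line with slope near -1 in the low-error regime.
ms = [10, 40, 160]
excess = [avg_error(m) - bayes_error for m in ms]
slope = float(np.polyfit(np.log(ms), np.log(excess), 1)[0])
```

With this setup the fitted slope comes out close to -1, consistent with the asymptotic prediction for low Bayes error rates; the paper's finding is that the slope flattens when the Bayes error is higher or the true model is a three-layer network.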
Year: 1996
DOI: 10.1142/S0129065796000245
Venue: INTERNATIONAL JOURNAL OF NEURAL SYSTEMS
Field: Normal distribution, Artificial intelligence, Artificial neural network, Confidence interval, Bayes error rate, Bayes' theorem, Naive Bayes classifier, Pattern recognition, Gaussian, Linear discriminant analysis, Statistics, Machine learning, Mathematics
DocType: Journal
Volume: 7
Issue: 3
ISSN: 0129-0657
Citations: 0
PageRank: 0.34
References: 0
Authors: 3
Name              Order  Citations  PageRank
Gregory J. Wolff  1      2124       0.46
David G. Stork    2      62710      6.17
Art B. Owen       3      2703       7.03