Title |
---|
Comparison of Evaluation Metrics in Classification Applications with Imbalanced Datasets |
Abstract |
---|
A new framework is proposed for comparing evaluation metrics in classification applications with imbalanced datasets (i.e., datasets in which the prior probability of one class vastly exceeds that of the others). For both model selection and testing the performance of a classifier, this framework identifies the most suitable evaluation metric among a set of candidate metrics. We apply this framework to compare two metrics: overall accuracy and the Kappa coefficient. Simulation results demonstrate that the Kappa coefficient is the more suitable of the two. |
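To see why overall accuracy can be misleading on imbalanced data while the Kappa coefficient is not, consider a trivial classifier that always predicts the majority class. The sketch below (an illustrative example, not the paper's framework; the class proportions and helper functions are assumptions) computes both metrics from first principles, with Cohen's kappa defined as (p_o − p_e)/(1 − p_e), where p_o is observed agreement and p_e is chance agreement from the marginal class frequencies.

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Overall accuracy: fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Chance agreement from the product of marginal class frequencies
    p_e = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 0.0

# Hypothetical imbalanced dataset: 95 negatives, 5 positives,
# and a degenerate classifier that always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))      # 0.95 — looks excellent
print(cohens_kappa(y_true, y_pred))  # 0.0  — no better than chance
```

Accuracy rewards the degenerate classifier with 95%, while kappa correctly scores it at zero, which is the intuition behind the paper's conclusion.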
Year | DOI | Venue |
---|---|---|
2008 | 10.1109/ICMLA.2008.34 | ICMLA |
Keywords | Field | DocType
---|---|---|
classification applications, imbalanced datasets, evaluation metrics, kappa coefficient, overall accuracy, suitable evaluation, classification application, simulation result, new framework, model selection, artificial neural networks, computational modeling, testing, accuracy | Data mining, Computer science, Model selection, Cohen's kappa, Artificial intelligence, Classifier (linguistics), Artificial neural network, Machine learning | Conference
Citations | PageRank | References
---|---|---|
15 | 0.83 | 4
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Mehrdad Fatourechi | 1 | 169 | 11.96 |
Rabab K Ward | 2 | 1440 | 135.88 |
Steven G. Mason | 3 | 18 | 1.57 |
Jane Huggins | 4 | 15 | 0.83 |
Alois Schlögl | 5 | 341 | 63.95 |
Gary E. Birch | 6 | 82 | 11.36 |