Title
Comparison of Evaluation Metrics in Classification Applications with Imbalanced Datasets
Abstract
A new framework is proposed for comparing evaluation metrics in classification applications with imbalanced datasets, i.e., datasets in which the prior probability of one class vastly exceeds that of the others. For both model selection and performance testing of a classifier, the framework identifies the most suitable evaluation metric from among a set of candidate metrics. We apply the framework to compare two metrics: overall accuracy and the Kappa coefficient. Simulation results demonstrate that the Kappa coefficient is the more suitable of the two for imbalanced datasets.
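To make the abstract's claim concrete, the following is a minimal sketch, not taken from the paper: the 95/5 class ratio and the always-predict-majority classifier are hypothetical, chosen only to illustrate why the Kappa coefficient, which corrects observed agreement for chance agreement via kappa = (p_o - p_e) / (1 - p_e), can expose a classifier that overall accuracy makes look strong on an imbalanced dataset.

import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical imbalanced ground truth: 950 majority-class samples, 50 minority.
y_true = np.array([0] * 950 + [1] * 50)

# Degenerate classifier that always predicts the majority class.
y_pred = np.zeros_like(y_true)

# Accuracy equals the majority-class prior (0.950), so the classifier looks good.
print("overall accuracy:", accuracy_score(y_true, y_pred))

# Observed agreement p_o = 0.95 equals chance agreement p_e = 0.95,
# so kappa = 0.0: no skill beyond always guessing the majority class.
print("kappa coefficient:", cohen_kappa_score(y_true, y_pred))

Under these assumptions the two metrics disagree sharply (0.95 versus 0.0), which is exactly the kind of discrepancy the proposed framework is designed to adjudicate when selecting a metric for model selection and performance testing.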
Year
2008
DOI
10.1109/ICMLA.2008.34
Venue
ICMLA
Keywords
classification applications, imbalanced datasets, evaluation metrics, kappa coefficient, overall accuracy, suitable evaluation, classification application, simulation result, new framework, model selection, artificial neural networks, computational modeling, testing, accuracy
Field
Data mining, Computer science, Model selection, Cohen's kappa, Artificial intelligence, Statistical classification, Artificial neural network, Machine learning
DocType
Conference
Citations
15
PageRank
0.83
References
4
Authors
6
Name                   Order   Citations   PageRank
Mehrdad Fatourechi     1       169         11.96
Rabab K Ward           2       1440        135.88
Steven G. Mason        3       18          1.57
Jane Huggins           4       15          0.83
Alois Schlögl          5       341         63.95
Gary E. Birch          6       82          11.36