Title
Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms
Abstract
Empirical research in learning algorithms for classification tasks generally requires the use of significance tests. The quality of a test is typically judged on Type I error (how often the test indicates a difference when it should not) and Type II error (how often it indicates no difference when it should). In this paper we argue that the replicability of a test is also of importance. We say that a test has low replicability if its outcome strongly depends on the particular random partitioning of the data that is used to perform it. We present empirical measures of replicability and use them to compare the performance of several popular tests in a realistic setting involving standard learning algorithms and benchmark datasets. Based on our results we give recommendations on which test to use.
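The notion of replicability described in the abstract can be illustrated with a small experiment: run the same significance test several times on independent random partitionings of one dataset and record how often pairs of runs reach the same accept/reject decision. The sketch below is illustrative only and assumes scikit-learn, SciPy, a resampled paired t-test, and two stand-in learners (naive Bayes versus a decision tree); it is not the paper's exact procedure or experimental setup.

    # Illustrative sketch: estimate how replicable a significance test's
    # decision is across different random partitionings of the same data.
    # Dataset, learners and the resampled t-test are stand-ins, not the
    # setup used in the paper.
    import numpy as np
    from scipy import stats
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    def significant_difference(seed, n_resamples=10, alpha=0.05):
        """Resampled paired t-test between two learners for one random seed."""
        rng = np.random.RandomState(seed)
        diffs = []
        for _ in range(n_resamples):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.33, random_state=rng.randint(10**6))
            acc_a = GaussianNB().fit(X_tr, y_tr).score(X_te, y_te)
            acc_b = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
            diffs.append(acc_a - acc_b)
        _, p_value = stats.ttest_1samp(diffs, 0.0)
        return p_value < alpha  # True if the test reports a significant difference

    # Replicability: fraction of pairs of independent runs that agree.
    decisions = [significant_difference(seed) for seed in range(10)]
    agreement = np.mean([a == b for i, a in enumerate(decisions)
                         for b in decisions[i + 1:]])
    print("decisions:", decisions)
    print("pairwise agreement (replicability): %.2f" % agreement)

A pairwise agreement near 1 means the test's conclusion barely depends on the random partitioning; values well below 1 indicate low replicability in the sense used in the abstract.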
Year
2004
DOI
10.1007/978-3-540-24775-3_3
Venue
ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PROCEEDINGS
Keywords
empirical research, type i error, type ii error, significance test, computer science
Field
Data mining, Computer science, Algorithm, Artificial intelligence, Random Number Seed, Type I and type II errors, Cross-validation, Machine learning, Empirical research
DocType
Conference
Volume
3056
ISSN
0302-9743
Citations
85
PageRank
12.93
References
7
Authors
2
Name                 Order  Citations  PageRank
Remco R. Bouckaert   1      484        82.93
Eibe Frank           2      11555      619.59