Title: Using score distributions to compare statistical significance tests for information retrieval evaluation
Abstract: Statistical significance tests can provide evidence that the observed difference in performance between 2 methods is not due to chance. In information retrieval (IR), some studies have examined the validity and suitability of such tests for comparing search systems. We argue here that current methods for assessing the reliability of statistical tests suffer from some methodological weaknesses, and we propose a novel way to study significance tests for retrieval evaluation. Using Score Distributions, we model the output of multiple search systems, produce simulated search results from such models, and compare them using various significance tests. A key strength of this approach is that we assess statistical tests under perfect knowledge about the truth or falseness of the null hypothesis. This new method for studying the power of significance tests in IR evaluation is formal and innovative. Following this type of analysis, we found that both the sign test and Wilcoxon signed test have more power than the permutation test and the t‐test. The sign test and Wilcoxon signed test also have good behavior in terms of type I errors. The bootstrap test shows few type I errors, but it has less power than the other methods tested.
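The abstract describes comparing significance tests on simulated search results where the truth of the null hypothesis is known by construction. The following is an illustrative sketch of that kind of comparison, not the authors' code: per-query scores for two hypothetical systems are drawn from assumed Gaussian score distributions with a built-in difference, and two of the paired tests mentioned in the abstract (the sign test and a sign-flip permutation test) are applied to the differences.

```python
# Illustrative sketch (not the paper's implementation): simulate per-query
# scores for two hypothetical systems from known score distributions, so the
# null hypothesis is false by construction, then apply paired significance
# tests to the per-query score differences.
import math
import random

random.seed(7)

def sign_test_p(diffs):
    """Two-sided exact sign test on paired differences (zeros discarded)."""
    pos = sum(1 for d in diffs if d > 0)
    n = sum(1 for d in diffs if d != 0)
    k = min(pos, n - pos)
    # Exact binomial tail under H0: P(difference > 0) = 0.5.
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

def permutation_test_p(diffs, rounds=10000):
    """Two-sided paired (sign-flip) permutation test on the mean difference."""
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(rounds):
        # Under H0 each paired difference is symmetric around 0, so its
        # sign can be flipped at random.
        flipped = [d if random.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / rounds

# Hypothetical setup: 50 queries; system B is 0.05 better than A on average.
queries = 50
system_a = [random.gauss(0.45, 0.10) for _ in range(queries)]
system_b = [a + random.gauss(0.05, 0.05) for a in system_a]
diffs = [b - a for b, a in zip(system_b, system_a)]

print("sign test p =", sign_test_p(diffs))
print("permutation test p =", permutation_test_p(diffs))
```

Because the simulation controls whether the systems truly differ, one can count how often each test rejects (power when the null is false, type I error rate when it is true) by repeating such runs.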
Year: 2019
DOI: 10.1002/asi.24203
Venue: Periodicals
Field: Data mining, Significance testing, Information retrieval, Computer science, Permutation, Information science, Wilcoxon signed-rank test, Statistical significance, Bootstrapping (electronics), Statistical hypothesis testing, Preprint
DocType: Journal
Volume: 71
Issue: 1
ISSN: 2330-1635
Citations: 0
PageRank: 0.34
References: 0
Authors: 4

Name                         Order  Citations  PageRank
Javier Parapar               1      188        25.91
D. E. Losada                 2      27         2.76
Manuel A. Presedo Quindimil  3      0          0.34
Alvaro Barreiro              4      226        22.42