Title
Large-scale validation and analysis of interleaved search evaluation
Abstract
Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.
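The abstract compares several interleaving variants and click credit-assignment functions. As a concrete illustration, below is a minimal sketch of one well-known variant, Team-Draft interleaving, paired with a naive per-team click-credit rule. The function names and the simple credit rule are illustrative assumptions, not the paper's exact formulation (the paper in fact learns improved credit functions).

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k, rng=random):
    """Team-Draft interleaving: the two rankers alternately 'draft' their
    highest-ranked document not yet shown; a coin flip decides which
    ranker picks first whenever the teams are even."""
    interleaved, team_a, team_b = [], [], []
    seen = set()

    def pick(ranking, team):
        # Take this ranker's best document that is not already shown.
        for doc in ranking:
            if doc not in seen:
                seen.add(doc)
                team.append(doc)
                interleaved.append(doc)
                return True
        return False

    while len(interleaved) < k:
        a_first = len(team_a) < len(team_b) or (
            len(team_a) == len(team_b) and rng.random() < 0.5)
        first = (ranking_a, team_a) if a_first else (ranking_b, team_b)
        second = (ranking_b, team_b) if a_first else (ranking_a, team_a)
        progressed = pick(*first)
        if len(interleaved) < k:
            progressed = pick(*second) or progressed
        if not progressed:
            break  # both rankings exhausted
    return interleaved, team_a, team_b

def click_credit(clicked, team_a, team_b):
    """Naive credit assignment: each click counts for the team that
    contributed the clicked document; aggregated over many queries,
    the team with more credit wins the comparison."""
    a = sum(1 for d in clicked if d in team_a)
    b = sum(1 for d in clicked if d in team_b)
    return a, b
```

With two disjoint rankings and k=4, each ranker contributes two documents, and clicks are credited to whichever ranker supplied the clicked result.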
Year
2012
DOI
10.1145/2094072.2094078
Venue
ACM Trans. Inf. Syst.
Keywords
statistical efficiency, popular technique, information retrieval system, implicit user feedback, different interleaving variant, large-scale validation, comprehensive analysis, observational implicit feedback measure, interleaved search evaluation, online technique, complete picture, retrieval system, sensitivity, empirical evidence, search engine, interleaving
Field
Scientific literature, Data mining, Search engine, Empirical evidence, Information retrieval, Computer science, Online evaluation, Interleaving
DocType
Journal
Volume
30
Issue
1
ISSN
1046-8188
Citations
84
PageRank
3.84
References
47
Authors
4
Name                Order  Citations  PageRank
Olivier Chapelle    1      5960       455.12
Thorsten Joachims   2      17387      1254.06
Filip Radlinski     3      2644       122.55
Yisong Yue          4      1200       73.85