Title
Selecting a subset of queries for acquisition of further relevance judgements
Abstract
Assessing the relative performance of search systems requires a test collection with a pre-defined set of queries and corresponding relevance assessments. The state-of-the-art process for constructing test collections uses a large number of queries and, for each query, selects a set of documents submitted by a group of participating systems to be judged. However, the initial set of judgements may be insufficient to reliably evaluate the performance of future, as-yet-unseen systems. In this paper, we propose a method that expands the set of relevance judgements as new systems are evaluated. We assume a limited budget for collecting additional relevance judgements. From the documents retrieved by the new systems we create a pool of unjudged documents. Rather than uniformly distributing the budget across all queries, we first select a subset of queries that are effective in evaluating systems and then uniformly allocate the budget across only these queries. Experimental results on the TREC 2004 Robust track test collection demonstrate the superiority of this budget allocation strategy.
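The abstract's allocation strategy can be illustrated with a minimal sketch. The query-effectiveness scores below are hypothetical placeholders (the paper selects queries by how useful they are for discriminating between systems; the scoring itself is not reproduced here):

```python
# Sketch of the budget allocation idea: pick a subset of effective
# queries, then split the judging budget uniformly across only them.
# Scores and query names are illustrative, not from the paper.

def allocate_budget(query_scores, budget, k):
    """Select the k highest-scoring queries and divide `budget`
    (number of documents to judge) uniformly among them; all other
    queries receive no new judgements."""
    selected = sorted(query_scores, key=query_scores.get, reverse=True)[:k]
    per_query = budget // k
    return {q: (per_query if q in selected else 0) for q in query_scores}

# Example: 4 queries, a budget of 100 judgements, keep the top 2.
scores = {"q1": 0.9, "q2": 0.2, "q3": 0.7, "q4": 0.4}
allocation = allocate_budget(scores, budget=100, k=2)
# q1 and q3 each receive 50 documents to judge; q2 and q4 receive none.
```

Uniform allocation over the selected subset contrasts with the baseline of spreading the same budget thinly over every query, which the paper's experiments show is less effective.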
Year
2011
DOI
10.1007/978-3-642-23318-0_12
Venue
ICTIR
Keywords
limited budget, test collection, robust track test collection, pre-defined set, additional relevance judgement, initial set, budget allocation strategy, new system, relevance judgement, corresponding relevance assessment, human computer interaction, information retrieval
Field
Information system, Information retrieval, Computer science, Budget allocation, Greedy algorithm, Artificial intelligence, Machine learning
DocType
Conference
Volume
6931
ISSN
0302-9743
Citations
8
PageRank
0.50
References
13
Authors
5
Name, Order, Citations, PageRank
Mehdi Hosseini, 1, 54, 3.77
Ingemar Cox, 2, 3652, 795.60
Natasa Milic-Frayling, 3, 917, 75.24
Vishwa Vinay, 4, 245, 15.94
Trevor Sweeting, 5, 17, 1.48