Title |
---|
A framework for determining necessary query set sizes to evaluate web search effectiveness |
Abstract |
---|
We describe a framework of bootstrapped hypothesis testing for estimating the confidence in one web search engine outperforming another over any randomly sampled query set of a given size. To validate this framework, we have constructed and made available a precision-oriented test collection consisting of manual binary relevance judgments for each of the top ten results of ten web search engines across 896 queries and the single best result for each of those queries. Results from this bootstrapping approach over typical query set sizes indicate that examining repeated statistical tests is imperative, as a single test is quite likely to find significant differences that do not necessarily generalize. We also find that the number of queries needed for a repeatable evaluation in a dynamic environment such as the web is much higher than previously studied. |
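The bootstrapped hypothesis test the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the per-query precision scores are randomly generated stand-ins for real judgments, and `bootstrap_confidence` is a hypothetical helper that resamples query sets of a given size with replacement and reports how often one engine's mean precision exceeds the other's.

```python
import random
import statistics

# Hypothetical per-query precision@10 pairs (engine A, engine B) over a
# judged query pool; the paper's collection has 896 judged queries, but we
# use a small synthetic pool here purely for illustration.
random.seed(0)
pool = [(random.random(), random.random() * 0.9) for _ in range(200)]

def bootstrap_confidence(pool, n_queries, n_resamples=1000):
    """Estimate the probability that engine A outperforms engine B on mean
    precision over a randomly sampled query set of size n_queries."""
    wins = 0
    for _ in range(n_resamples):
        # Draw a bootstrap query set of the target size, with replacement.
        sample = random.choices(pool, k=n_queries)
        mean_a = statistics.mean(a for a, _ in sample)
        mean_b = statistics.mean(b for _, b in sample)
        if mean_a > mean_b:
            wins += 1
    return wins / n_resamples

# Repeating the comparison over many resampled query sets, rather than
# running a single test, is the point the abstract makes: one sampled set
# can show a "significant" difference that does not generalize.
conf = bootstrap_confidence(pool, n_queries=50)
print(conf)
```

Growing `n_queries` until this confidence stabilizes near 1.0 (or near 0.5, indicating no reliable difference) is one way to read off a necessary query set size.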
Year | DOI | Venue |
---|---|---|
2005 | 10.1145/1062745.1062926 | WWW (Special interest tracks and posters) |
Keywords | Field | DocType |
---|---|---|
bootstrapping approach,single best result,typical query set size,manual binary relevance judgment,bootstrapped hypothesis,statistical test,web search effectiveness,dynamic environment,precision-oriented test collection,web search engine,necessary query set size,single test,hypothesis test,deep web,probing,crawling | Web search engine,Data mining,Web search query,World Wide Web,Search engine,Information retrieval,Query expansion,Computer science,Bootstrapping,Web query classification,Statistical hypothesis testing,Binary number | Conference |
ISBN | Citations | PageRank |
---|---|---|
1-59593-051-5 | 6 | 1.94 |
References | Authors |
---|---|
4 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Eric C. Jensen | 1 | 696 | 46.72 |
Steven M. Beitzel | 2 | 696 | 46.72 |
Ophir Frieder | 3 | 3300 | 419.55 |
Abdur Chowdhury | 4 | 2013 | 160.59 |