Abstract |
---|
Learning-to-rank algorithms, which can automatically adapt ranking functions in web search, require a large volume of training data. A traditional way of generating training examples is to employ human experts to judge the relevance of documents. Unfortunately, this is difficult, time-consuming, and costly. In this paper, we study the problem of exploiting click-through data, which can be collected at much lower cost, for learning web search rankings. We extract pairwise relevance preferences from a large-scale aggregated click-through dataset, compare these preferences with explicit human judgments, and use them as training examples to learn ranking functions. We find that click-through data are useful and effective in learning ranking functions. A straightforward use of aggregated click-through data can outperform human judgments. We demonstrate that the strategies are only slightly affected by fraudulent clicks. We also reveal that highly reliable pairs alone, e.g., pairs of documents with large click-frequency differences, are not sufficient for learning. |
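The pair-extraction idea described in the abstract can be sketched as follows. This is a minimal illustration under assumed details (a simple click-frequency-difference rule with a threshold `min_diff`), not the paper's exact strategy: given click counts aggregated over many users, emit a preference "d1 is more relevant than d2" whenever d1 was clicked sufficiently more often than d2 for the same query.

```python
from collections import defaultdict

def extract_preference_pairs(click_counts, min_diff=0):
    """Derive pairwise relevance preferences from aggregated clicks.

    click_counts: dict mapping (query, doc) -> aggregated click count.
    Returns (query, preferred_doc, other_doc) triples whenever one
    document was clicked more than `min_diff` additional times than
    another for the same query.  The threshold and the frequency-
    difference rule are illustrative assumptions.
    """
    by_query = defaultdict(list)
    for (query, doc), clicks in click_counts.items():
        by_query[query].append((doc, clicks))

    pairs = []
    for query, docs in by_query.items():
        for i, (d1, c1) in enumerate(docs):
            for d2, c2 in docs[i + 1:]:
                if c1 - c2 > min_diff:
                    pairs.append((query, d1, d2))  # d1 preferred over d2
                elif c2 - c1 > min_diff:
                    pairs.append((query, d2, d1))  # d2 preferred over d1
    return pairs

prefs = extract_preference_pairs(
    {("q", "a.html"): 50, ("q", "b.html"): 5, ("q", "c.html"): 5},
    min_diff=10,
)
# "a.html" is preferred over both "b.html" and "c.html"; the tie
# between "b.html" and "c.html" yields no pair.
```

Such pairs can then feed a pairwise learning-to-rank algorithm in place of (or alongside) human-judged pairs; the abstract's finding is that pairs with only large click-frequency differences, while reliable, are not sufficient on their own.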
Year | DOI | Venue |
---|---|---|
2008 | 10.1145/1458082.1458095 | CIKM |
Keywords | Field | DocType
---|---|---|
training example, ranking function, training data, human expert, click-through data, large-scale aggregated click-through dataset, web search ranking, explicit human judgment, human judgment, aggregated click-through data, learning to rank | Training set, Data mining, Learning to rank, Pairwise comparison, Click-through rate, Ranking, Information retrieval, Computer science, Artificial intelligence, Machine learning | Conference
Citations | PageRank | References
---|---|---|
45 | 1.30 | 20
Authors |
---|
4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Zhicheng Dou | 1 | 706 | 41.96 |
Ruihua Song | 2 | 1138 | 59.33 |
Xiao-Jie Yuan | 3 | 255 | 34.96 |
Ji-Rong Wen | 4 | 4431 | 265.98 |