Abstract
---
Many of the recently proposed algorithms for learning feature-based ranking functions are based on the pairwise preference framework, in which, instead of taking documents in isolation, document pairs are used as instances in the learning process. One disadvantage of this process is that a noisy relevance judgment on a single document can lead to a large number of mislabeled document pairs. This can jeopardize robustness and deteriorate overall ranking performance. In this paper we study the effects of outlying pairs in rank learning with pairwise preferences and introduce a new meta-learning algorithm capable of suppressing these undesirable effects. This algorithm works as a second optimization step in which any linear baseline ranker can be used as input. Experiments on eight different ranking datasets show that this optimization step produces statistically significant performance gains over state-of-the-art methods.
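The noise-amplification effect the abstract describes is easy to see concretely. The sketch below (an illustration, not the paper's algorithm) builds the preference pairs induced by per-document relevance labels and shows how a single wrong judgment corrupts every pair the document participates in:

```python
# Illustrative sketch: in the pairwise preference framework, training
# instances are ordered pairs (i, j) where doc i is labeled more
# relevant than doc j. One noisy label therefore flips/destroys every
# pair that document appears in.
from itertools import combinations

def preference_pairs(labels):
    """All ordered pairs (i, j) where doc i is labeled more relevant than doc j."""
    pairs = set()
    for i, j in combinations(range(len(labels)), 2):
        if labels[i] > labels[j]:
            pairs.add((i, j))
        elif labels[j] > labels[i]:
            pairs.add((j, i))
    return pairs

# Hypothetical query with 10 documents: one relevant (doc 0), nine non-relevant.
clean = [1] + [0] * 9
# A single noisy judgment marks doc 0 non-relevant.
noisy = [0] + [0] * 9

# One wrong label on one document changed 9 of the training pairs.
print(len(preference_pairs(clean) ^ preference_pairs(noisy)))  # -> 9
```

This is why the paper treats such mislabeled pairs as outliers to be suppressed in a second optimization step, rather than trusting every pair equally.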
Year | DOI | Venue |
---|---|---
2008 | 10.1145/1458082.1458348 | CIKM |
Keywords | Field | DocType
---|---|---
single document, feature-based ranking function, pairwise preference ranking, mis-labeled document pair, pairwise preference, suppressing outlier, optimization step, algorithm work, different ranking datasets, document pair, pairwise preference framework, overall ranking performance, ranking | Learning to rank, Data mining, Pairwise comparison, Ranking SVM, Ranking, Computer science, Outlier, Robustness (computer science), Ranking (information retrieval), Preference learning, Artificial intelligence, Machine learning | Conference
Citations | PageRank | References
---|---|---
3 | 0.37 | 5
Authors (4)
---
Name | Order | Citations | PageRank |
---|---|---|---
Vitor R. Carvalho | 1 | 672 | 36.38 |
Jonathan Elsas | 2 | 399 | 18.04 |
William W. Cohen | 3 | 10178 | 1243.74 |
Jaime G. Carbonell | 4 | 5019 | 724.15 |