Abstract
---
One of the promising directions in learning-to-rank research concerns the choice of the objective function to be maximized by machine learning algorithms. We describe a novel technique for smoothing an arbitrary ranking metric and demonstrate how to use it to maximize retrieval quality in terms of the $NDCG$ metric. The idea behind our listwise ranking model, called TieRank, is an artificial probabilistic tying of the predicted relevance scores at each iteration of the learning process, which defines a distribution over the set of all permutations of the retrieved documents. This distribution yields the desired smoothed version of the target retrieval quality metric, which can then be maximized by gradient descent. Experiments on the LETOR collections show that TieRank outperforms most existing learning-to-rank algorithms.
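The core difficulty the abstract addresses is that $NDCG$ is a step function of the model's scores (it depends only on the induced ordering), so its gradient is zero almost everywhere. A minimal sketch of the general idea, assuming a generic Monte Carlo smoothing via random perturbation of the scores rather than TieRank's specific tying scheme (the function names and the Gaussian-noise choice here are illustrative assumptions, not the paper's method):

```python
import math
import random

def dcg(relevances):
    """Discounted cumulative gain of relevance labels in ranked order."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(relevances))

def ndcg(scores, relevances):
    """NDCG of ranking documents by descending predicted score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg([relevances[i] for i in order]) / ideal if ideal > 0 else 0.0

def smoothed_ndcg(scores, relevances, sigma=0.5, n_samples=1000, seed=0):
    """Expected NDCG under random perturbation of the scores.

    Perturbing scores induces a distribution over permutations, and the
    expectation is a smooth function of the scores, so it admits a
    nonzero gradient.  This is a generic illustration of the smoothing
    principle, not TieRank's probabilistic tying construction.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        noisy = [s + rng.gauss(0.0, sigma) for s in scores]
        total += ndcg(noisy, relevances)
    return total / n_samples
```

Any scheme that places a distribution over permutations conditioned on the scores (perturbation, probabilistic ties, Plackett-Luce sampling) makes the expected metric differentiable and thus amenable to gradient ascent, which is the property the paper exploits.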
Year | DOI | Venue |
---|---|---|
2011 | 10.1145/2063576.2063888 | CIKM |
Keywords | Field | DocType
---|---|---
novel technique, artificial probabilistic tying, smooth function, target retrieval quality metric, smoothing NDCG metrics, LETOR collection, retrieval quality, gradient descent method, listwise ranking model, appropriate choice, objective function, learning to rank, information retrieval, machine learning | Online machine learning, Data mining, Learning to rank, Gradient descent, Ranking SVM, Ranking, Computer science, Permutation, Smoothing, Artificial intelligence, Probabilistic logic, Machine learning | Conference
Citations | PageRank | References
---|---|---
1 | 0.41 | 5
Authors
---
6
Name | Order | Citations | PageRank |
---|---|---|---|
Andrey Kustarev | 1 | 31 | 2.26 |
Yury Ustinovskiy | 2 | 29 | 4.59 |
Yury Logachev | 3 | 4 | 1.50 |
Evgeny Grechnikov | 4 | 1 | 0.41 |
Ilya Segalovich | 5 | 143 | 9.67 |
Pavel Serdyukov | 6 | 1341 | 90.10 |