Title
Quality versus efficiency in document scoring with learning-to-rank models.
Abstract
Highlights: a characterization of the quality versus cost trade-off of Learning-to-Rank models; QuickRank, a public-domain Learning-to-Rank learning and evaluation framework; and a new measure, named AuQC, for the evaluation of LtR algorithms.
Learning-to-Rank (LtR) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each document, which is in turn exploited to rank the documents effectively. Although the scoring efficiency of LtR models is critical in several applications - e.g., it directly impacts the response time and throughput of Web query processing - it has received relatively little attention so far. The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that influence both effectiveness and efficiency, and higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than algorithms that induce simpler models, such as linear combinations of features. We extensively analyze the quality versus efficiency trade-off of a wide spectrum of state-of-the-art LtR algorithms, and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we use publicly available datasets and we contribute an open-source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees (GBRT), Lambda-MART (λ-MART), and the first public-domain implementation of Oblivious Lambda-MART, an algorithm that induces forests of oblivious regression trees. We investigate how the different training parameters impact the quality versus efficiency trade-off, and we provide a thorough comparison of several algorithms in the quality-cost space. The experiments conducted show that there is no overall best algorithm: the optimal choice depends on the time budget.
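The trade-off summarized above can be made concrete with a small, self-contained C++ sketch (C++ being the language of the contributed framework). The sketch is purely illustrative and is not taken from QuickRank or from the paper: all names, types, and numbers in it (TreeNode, CandidateModel, the NDCG and cost figures) are hypothetical. It contrasts the per-document scoring cost of a linear ranker with that of an additive forest of regression trees, and shows a budget-driven selection in the spirit of choosing the most effective ranker that fits a given time budget.

    // Minimal sketch (not QuickRank code): scoring with a linear model vs. an
    // additive forest of regression trees, plus budget-driven model selection.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // A node of a binary regression tree: internal nodes test one feature
    // against a threshold, leaves hold the partial score contribution.
    struct TreeNode {
        int feature = -1;        // -1 marks a leaf
        float threshold = 0.0f;
        float leaf_value = 0.0f;
        int left = -1, right = -1;  // indices into the tree's node array
    };

    struct RegressionTree {
        std::vector<TreeNode> nodes;  // nodes[0] is the root
        float score(const std::vector<float>& x) const {
            int i = 0;
            while (nodes[i].feature >= 0)
                i = (x[nodes[i].feature] <= nodes[i].threshold) ? nodes[i].left
                                                                : nodes[i].right;
            return nodes[i].leaf_value;
        }
    };

    // Additive ensemble (GBRT / lambda-MART style): the document score is the
    // sum of all tree outputs, so the per-document cost grows with the number
    // of trees and with their depth.
    float forest_score(const std::vector<RegressionTree>& forest,
                       const std::vector<float>& x) {
        float s = 0.0f;
        for (const auto& t : forest) s += t.score(x);
        return s;
    }

    // Linear ranker: one multiply-add per feature, cheap but typically less
    // effective than tree ensembles.
    float linear_score(const std::vector<float>& w, const std::vector<float>& x) {
        float s = 0.0f;
        for (std::size_t i = 0; i < w.size(); ++i) s += w[i] * x[i];
        return s;
    }

    // Hypothetical summary of a trained model: measured effectiveness (e.g.
    // NDCG@10 on a validation set) and per-document scoring cost.
    struct CandidateModel {
        std::string name;
        double ndcg;
        double cost_us;  // microseconds per document
    };

    // Among the candidates whose scoring cost fits the time budget, keep the
    // one with the highest measured quality.
    const CandidateModel* pick_under_budget(const std::vector<CandidateModel>& c,
                                            double budget_us) {
        const CandidateModel* best = nullptr;
        for (const auto& m : c)
            if (m.cost_us <= budget_us && (!best || m.ndcg > best->ndcg))
                best = &m;
        return best;
    }

    int main() {
        // Illustrative numbers only, not results from the paper.
        std::vector<CandidateModel> candidates = {
            {"linear", 0.42, 0.5},
            {"small tree forest", 0.47, 12.0},
            {"large tree forest", 0.49, 95.0},
        };
        for (double budget : {1.0, 20.0, 200.0}) {
            const CandidateModel* m = pick_under_budget(candidates, budget);
            std::cout << "budget " << budget << " us -> "
                      << (m ? m->name : "none") << "\n";
        }
        return 0;
    }

Under this sketch, a tight per-document budget forces the choice of the cheap linear model, while looser budgets admit progressively larger tree ensembles, mirroring the paper's observation that no single algorithm is best for every time budget.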
Year
2016
DOI
10.1016/j.ipm.2016.05.004
Venue
Inf. Process. Manage.
Keywords
Efficiency, Learning-to-rank, Document scoring
Field
Data mining, Linear combination, Learning to rank, Computer science, Response time, Implementation, Artificial intelligence, Throughput, Web search query, Information retrieval, Ranking, Regression, Machine learning
DocType
Journal
Volume
52
Issue
6
ISSN
0306-4573
Citations
16
PageRank
0.85
References
23
Authors
6
Name                  Order  Citations  PageRank
Gabriele Capannini    1      83         8.90
Claudio Lucchese      2      1104       73.76
Franco Maria Nardini  3      314        36.52
Salvatore Orlando     4      1595       202.29
Raffaele Perego       5      1471       108.91
Nicola Tonellotto     6      377        39.90