Abstract |
---|
Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, the objective we would like to optimize in such scenarios is typically a global non-decomposable one, such as the area under the precision-recall curve, the $F_\beta$ score, precision at fixed recall, etc. In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale systems are trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building-block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracy-objective baseline. |
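As a point of reference for the metrics the abstract names, the $F_\beta$ score combines precision $P$ and recall $R$ as $F_\beta = (1+\beta^2)PR/(\beta^2 P + R)$. The sketch below is illustrative only (it is not code from the paper) and uses a hypothetical helper name:

```python
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R).

    Illustrative helper, not from the paper. beta > 1 weights recall
    more heavily; beta < 1 favors precision; beta = 1 gives F1.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# F1 is the harmonic mean of precision and recall.
print(f_beta(0.5, 0.5))  # 0.5
```

Such metrics are non-decomposable: they cannot be written as a sum of per-example losses, which is what makes direct large-scale optimization difficult and motivates the bound-based approach the abstract describes.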
Year | Venue | Field |
---|---|---|
2016 | arXiv: Machine Learning | Data mining, Ranking, Computer science, Artificial intelligence, Recall, Machine learning, Scalability |

DocType | Volume | Citations |
---|---|---|
Journal | abs/1608.04802 | 1 |

PageRank | References | Authors |
---|---|---|
0.42 | 4 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Elad Eban | 1 | 29 | 4.86 |
Mariano Schain | 2 | 1 | 0.42 |
Ariel Gordon | 3 | 1 | 0.76 |
Rif Saurous | 4 | 148 | 10.49 |
Gal Elidan | 5 | 871 | 77.14 |