Abstract |
---|
Reducing excessive costs in feature acquisition and model evaluation has been a long-standing challenge in learning-to-rank systems. A cascaded ranking architecture turns ranking into a pipeline of multiple stages, and has been shown to be a powerful approach to balancing efficiency and effectiveness trade-offs in large-scale search systems. However, learning a cascade model is often complex, and is usually performed stagewise, with each stage optimized independently of the rest of the ranking pipeline. In this work we show that learning a cascade ranking model in this manner is often suboptimal in terms of both effectiveness and efficiency. We present a new general framework for learning an end-to-end cascade of rankers using backpropagation. We show that stagewise objectives can be chained together and optimized jointly to achieve significantly better trade-offs globally. This novel approach generalizes not only to differentiable models but also to state-of-the-art tree-based algorithms such as LambdaMART and cost-efficient gradient boosted trees, and it opens up new opportunities for exploring efficiency-effectiveness trade-offs in large-scale search systems.
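The core idea of the abstract — chaining stagewise objectives and updating every stage of a cascade from a single loss via backpropagation, rather than fitting each stage in isolation — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a toy two-stage linear cascade with a differentiable soft gate, and all names, shapes, and the squared loss are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: stage 1 sees cheap features, stage 2 sees expensive ones.
# (Hypothetical setup; the paper's features and stages differ.)
n = 200
X1 = rng.normal(size=(n, 5))                  # cheap, stage-1 features
X2 = rng.normal(size=(n, 5))                  # expensive, stage-2 features
y = (X1[:, 0] + X2[:, 0] > 0).astype(float)   # synthetic relevance labels

w1 = np.zeros(5)  # stage-1 scorer
w2 = np.zeros(5)  # stage-2 scorer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(500):
    g = sigmoid(X1 @ w1)   # stage 1: soft (differentiable) pass-through gate
    h = sigmoid(X2 @ w2)   # stage 2: relevance score for surviving items
    pred = g * h           # cascade output: stage-2 score gated by stage 1
    r = pred - y
    losses.append(float(np.mean(r ** 2)))
    # Backpropagate ONE shared loss through both stages jointly,
    # instead of training each stage independently.
    dpred = 2.0 * r / n
    w1 -= lr * (X1.T @ (dpred * g * (1 - g) * h))
    w2 -= lr * (X2.T @ (dpred * g * h * (1 - h)))
```

Because the gate is a sigmoid rather than a hard cutoff, the gradient of the final loss flows into both `w1` and `w2`, so the early stage learns to pass items that the late stage will score well, which is the kind of global trade-off a stagewise-independent pipeline cannot capture.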
Year | DOI | Venue |
---|---|---|
2019 | 10.1145/3289600.3290986 | WSDM |
Keywords | Field | DocType |
---|---|---|
cascade ranking, learning to rank, multi-stage retrieval | Learning to rank, Data mining, Ranking, Computer science, Differentiable function, Cascade, Backpropagation | Conference |
ISBN | Citations | PageRank |
---|---|---|
978-1-4503-5940-5 | 2 | 0.37 |
References | Authors |
---|---|
30 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Luke Gallagher | 1 | 16 | 2.92 |
Ruey-Cheng Chen | 2 | 108 | 11.87 |
Roi Blanco | 3 | 872 | 57.42 |
Shane Culpepper | 4 | 519 | 47.52 |