Abstract
---|
This paper presents Conv-KNRM, a Convolutional Kernel-based Neural Ranking Model that models n-gram soft matches for ad-hoc search. Instead of exact matching query and document n-grams, Conv-KNRM uses Convolutional Neural Networks to represent n-grams of various lengths and soft-matches them in a unified embedding space. The n-gram soft matches are then utilized by the kernel pooling and learning-to-rank layers to generate the final ranking score. Conv-KNRM can be learned end-to-end and fully optimized from user feedback. The learned model's generalizability is investigated by testing how well it performs in a related domain with small amounts of training data. Experiments on English search logs, Chinese search logs, and TREC Web track tasks demonstrated consistent advantages of Conv-KNRM over prior neural IR methods and feature-based methods.
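The abstract's pipeline (embed n-grams, soft-match them via cosine similarity, then summarize matches with kernel pooling) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy embeddings, kernel means, and kernel width below are illustrative assumptions, and the CNN n-gram encoder and learning-to-rank layer are omitted.

```python
import numpy as np

def kernel_pooling(q_emb, d_emb, mus, sigma=0.1):
    """RBF kernel pooling over a query/document similarity matrix.

    q_emb: (|q|, dim) query n-gram embeddings
    d_emb: (|d|, dim) document n-gram embeddings
    mus:   kernel means; each kernel counts soft matches near its mean
    """
    # Cosine similarity: L2-normalize rows, then take dot products.
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    M = q @ d.T  # translation matrix of soft matches, shape (|q|, |d|)

    feats = []
    for mu in mus:
        # Soft term frequency per query n-gram under this kernel.
        k = np.exp(-((M - mu) ** 2) / (2 * sigma ** 2)).sum(axis=1)
        # Log-sum over query n-grams gives one pooled feature per kernel.
        feats.append(np.log(np.clip(k, 1e-10, None)).sum())
    return np.array(feats)

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))        # 3 query n-gram embeddings (toy)
d = rng.normal(size=(10, 8))       # 10 document n-gram embeddings (toy)
mus = np.linspace(-0.9, 0.9, 10)   # 10 soft-match kernels; an exact-match kernel (mu=1) is often added
phi = kernel_pooling(q, d, mus)
print(phi.shape)                   # one pooled feature per kernel: (10,)
```

In the full model, a learning-to-rank layer maps the pooled feature vector `phi` (one per query-n-gram-length/document-n-gram-length pair) to the final ranking score, and gradients flow back through the kernels into the CNN encoder, which is what makes end-to-end training from user feedback possible.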
Year | DOI | Venue
---|---|---
2018 | 10.1145/3159652.3159659 | WSDM 2018: The Eleventh ACM International Conference on Web Search and Data Mining, Marina Del Rey, CA, USA, February 2018
Field | DocType | ISBN
---|---|---
Kernel (linear algebra), Training set, Generalizability theory, Data mining, Embedding, Ranking, Computer science, Convolutional neural network, Pooling | Conference | 978-1-4503-5581-0
Citations | PageRank | References
---|---|---
38 | 1.24 | 26
Authors (4)
---|
Name | Order | Citations | PageRank |
---|---|---|---|
Zhuyun Dai | 1 | 178 | 10.99 |
Chen-Yan Xiong | 2 | 405 | 30.82 |
James P. Callan | 3 | 6237 | 833.28 |
Zhiyuan Liu | 4 | 2037 | 123.68 |