Title
Cross-Lingual Training of Neural Models for Document Ranking
Abstract
We tackle the challenge of cross-lingual training of neural document ranking models for monolingual retrieval, specifically leveraging relevance judgments in English to improve search in non-English languages. We successfully apply multilingual BERT (mBERT) to document ranking and compare against a number of alternatives: translating the training data, translating the documents, multi-stage hybrids, and ensembles. Experiments on test collections in six languages from diverse language families show that model-based relevance transfer using mBERT can significantly improve search quality in (non-English) monolingual retrieval, although other “low resource” approaches remain competitive.
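The core technique named in the abstract, model-based relevance transfer, amounts to fine-tuning an mBERT cross-encoder on English query-document relevance judgments and then scoring query-document pairs in another language zero-shot. The sketch below illustrates that idea; it is not the authors' released code, and the checkpoint name, binary-relevance formulation, toy data, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch of model-based relevance transfer with mBERT:
# fine-tune a cross-encoder on English (query, document, label) pairs,
# then score a pair in another language zero-shot. The checkpoint and
# hyperparameters below are assumptions, not the paper's exact setup.
import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2).to(device)

# Toy English training data: (query, document, relevance label).
train = [
    ("capital of france", "Paris is the capital and largest city of France.", 1),
    ("capital of france", "The Nile is the longest river in Africa.", 0),
]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for query, doc, label in train:
    # Encode the pair as [CLS] query [SEP] document [SEP].
    batch = tokenizer(query, doc, truncation=True, max_length=256,
                      return_tensors="pt").to(device)
    loss = model(**batch, labels=torch.tensor([label], device=device)).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot scoring of a French pair: mBERT's shared multilingual
# representations let the English relevance signal transfer.
model.eval()
with torch.no_grad():
    batch = tokenizer("capitale de la france",
                      "Paris est la capitale de la France.",
                      truncation=True, max_length=256,
                      return_tensors="pt").to(device)
    score = model(**batch).logits.softmax(-1)[0, 1].item()
print(f"relevance score: {score:.3f}")
```

In a full retrieval system, a cross-encoder like this would typically rerank a candidate list produced by a first-stage retriever such as BM25 rather than score every document in the collection.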
Year
2020
DOI
10.18653/v1/2020.findings-emnlp.249
Venue
EMNLP
DocType
Conference
Volume
2020.findings-emnlp
Citations
0
PageRank
0.34
References
0
Authors
3
Name        Order  Citations  PageRank
Peng Shi    1      12         3.30
He Bai      2      0          0.34
Jimmy Lin   3      4800       376.93