Title
Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature
Abstract
Information overload is a prevalent challenge in many high-value domains. A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months. More generally, the biomedical literature expands by two papers every minute, totalling over a million new papers every year. Search in the biomedical realm and many other vertical domains is challenging due to the scarcity of direct supervision from click logs. Self-supervised learning has emerged as a promising direction to overcome the annotation bottleneck. We propose a general approach for vertical search based on domain-specific pretraining and present a case study for the biomedical domain. Despite being substantially simpler and not using any relevance labels for training or development, our method performs comparably or better than the best systems in the official TREC-COVID evaluation, a COVID-related biomedical search competition. Using distributed computing in modern cloud infrastructure, our system can scale to tens of millions of articles on PubMed and has been deployed as Microsoft Biomedical Search, a new search experience for biomedical literature: https://aka.ms/biomedsearch.
Year
2021
DOI
10.1145/3447548.3469053
Venue
Knowledge Discovery and Data Mining
Keywords
Domain-specific pretraining, Search, Biomedical, NLP, COVID-19
DocType
Conference
Citations
1
PageRank
0.35
References
0
Authors
15
Name                Order  Citations  PageRank
Yu Wang             1      2279       211.60
Jinchao Li          2      42         8.20
Tristan Naumann     3      7          1.16
Chen-Yan Xiong      4      405        30.82
Hao Cheng           5      1          1.02
Robert Tinn         6      7          0.82
Cliff Wong          7      3          1.05
Naoto Usuyama       8      7          1.16
Richard J Rogahn    9      1          0.35
Zhihong Shen        10     149        7.97
Yang Qin            11     31         8.65
Eric Horvitz        12     9402       1058.25
Paul N. Bennett     13     1500       87.93
Jianfeng Gao        14     5729       296.43
Hoifung Poon        15     1158       57.54