Title
CBLUE: A Chinese Biomedical Language Understanding Evaluation Benchmark
Abstract
Artificial intelligence (AI), together with recent progress in biomedical language understanding, is gradually bringing great promise to medical practice. With the development of biomedical language understanding benchmarks, AI applications are becoming widely used in the medical field. However, most existing benchmarks are limited to English, which makes it challenging to replicate many of the successes in English for other languages. To facilitate research in this direction, we collect real-world biomedical data and present the first Chinese Biomedical Language Understanding Evaluation (CBLUE) benchmark: a collection of natural language understanding tasks, including named entity recognition, information extraction, clinical diagnosis normalization, and single-sentence/sentence-pair classification, together with an associated online platform for model evaluation, comparison, and analysis. To establish a baseline for these tasks, we report empirical results for 11 current Chinese pre-trained language models; the experiments show that state-of-the-art neural models still perform far worse than the human ceiling.
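For readers unfamiliar with how such tasks are typically approached, the sketch below shows one way a pre-trained Chinese model could be applied to a biomedical named entity recognition input using the HuggingFace transformers library. This is not the authors' evaluation code: the model name, label set, and example sentence are illustrative assumptions, and in practice the classification head would first be fine-tuned on the benchmark's NER training split.

```python
# Minimal sketch (not the official CBLUE code) of running a pre-trained
# Chinese model on a biomedical NER input. Model, labels, and text are
# illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical biomedical NER label set in the BIO scheme.
labels = ["O", "B-DISEASE", "I-DISEASE", "B-DRUG", "I-DRUG"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=len(labels)
)

# Example clinical sentence: "The patient has hypertension;
# amlodipine is recommended."
text = "患者患有高血压，建议服用氨氯地平。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Per-token label predictions. The classification head here is freshly
# initialized, so outputs are meaningless until fine-tuned on NER data.
predictions = [labels[i] for i in logits.argmax(dim=-1)[0].tolist()]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, predictions)))
```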
Year
2022
DOI
10.18653/v1/2022.acl-long.544
Venue
Annual Meeting of the Association for Computational Linguistics
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
23
Name             Order  Citations  PageRank
Ningyu Zhang     1      0          0.34
Mosha Chen       2      0          0.34
Zhen Bi          3      0          3.38
Xiaozhuan Liang  4      0          1.01
Lei Li           5      799        69.54
Xin Shang        6      0          0.34
Kangping Yin     7      0          0.68
Chuanqi Tan      8      29         9.25
Jian Xu          9      0          0.34
Fei Huang        10     506        56.44
Luo Si           11     2498       169.52
Fei Huang        12     2          7.54
Guotong Xie      13     216        17.31
Zhifang Sui      14     172        39.06
Baobao Chang     15     0          0.34
Hui Zong         16     0          0.34
Zheng Yuan       17     0          0.34
Linfeng Li       18     0          1.01
Jun Yan          19     1798       85.25
Hongying Zan     20     15         19.05
Kunli Zhang      21     0          2.37
Buzhou Tang      22     368        34.04
Qingcai Chen     23     809        66.72