Title
Multi-label Classification for Clinical Text with Feature-level Attention
Abstract
Multi-label text classification, which tags a given plain text with the most relevant labels from a label space, is an important task in natural language processing. To diagnose diseases, clinical researchers apply machine-learning algorithms to multi-label clinical text classification. However, conventional machine learning methods can capture neither deep semantic information nor the context of words. Diagnostic information in EHRs (Electronic Health Records) is mainly recorded as unstructured clinical free text, which is an obstacle to clinical feature extraction. Moreover, feature engineering is time-consuming and labor-intensive. With the rapid development of deep learning, we apply neural network models to address the problems mentioned above. To support multi-label classification on EHRs, we propose FAMLC-BERT (Feature-level Attention for Multi-Label Classification on BERT), which captures semantic features from different layers. The model combines feature-level attention with BERT to recognize the labels of EHRs. We empirically compare our model with other state-of-the-art models on real-world documents collected from a hospital. Experiments show that our model achieves significant improvements over the selected benchmarks.
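The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes (a learned attention over BERT's layer-wise hidden states feeding a sigmoid multi-label head), assuming PyTorch and the Hugging Face transformers library. All class and variable names are hypothetical; this is not the authors' FAMLC-BERT code.

# A minimal sketch (not the authors' FAMLC-BERT implementation) of feature-level
# attention over BERT's layer outputs for multi-label classification.
# Assumes PyTorch and the Hugging Face `transformers` library; names are hypothetical.
import torch
import torch.nn as nn
from transformers import BertModel

class LayerAttentionMultiLabelClassifier(nn.Module):
    def __init__(self, num_labels, model_name="bert-base-uncased"):
        super().__init__()
        # Request hidden states from every encoder layer, not only the last one.
        self.bert = BertModel.from_pretrained(model_name, output_hidden_states=True)
        hidden_size = self.bert.config.hidden_size
        num_layers = self.bert.config.num_hidden_layers + 1  # embeddings + encoder layers
        # One learnable attention score per layer ("feature-level" weights).
        self.layer_scores = nn.Parameter(torch.zeros(num_layers))
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Stack per-layer hidden states: (num_layers, batch, seq_len, hidden_size).
        states = torch.stack(outputs.hidden_states, dim=0)
        weights = torch.softmax(self.layer_scores, dim=0).view(-1, 1, 1, 1)
        fused = (weights * states).sum(dim=0)   # attention-weighted mixture of layers
        pooled = fused[:, 0]                    # [CLS] position as document vector
        return self.classifier(pooled)          # raw logits, one per label

# Multi-label training would use an independent sigmoid per label, e.g.:
#   loss = nn.BCEWithLogitsLoss()(model(input_ids, attention_mask), targets.float())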
Year
2020
DOI
10.1109/BigDataSecurity-HPSC-IDS49724.2020.00042
Venue
2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS)
Keywords
Clinical Text, Multi-label Text Classification, Predicting Diagnosis, Attention, Deep Learning
DocType
Conference
ISBN
978-1-7281-6874-6
Citations
0
PageRank
0.34
References
0
Authors
8
Name | Order | Citations | PageRank
Disheng Pan | 1 | 3 | 1.74
Xizi Zheng | 2 | 0 | 0.34
Weijie Liu | 3 | 0 | 0.34
Mengya Li | 4 | 0 | 0.34
Meng Ma | 5 | 78 | 15.71
Ying Zhou | 6 | 0 | 0.34
yang li | 7 | 64 | 1.06
Ping Wang | 8 | 93 | 44.15