Title
Exploiting the systematic review protocol for classification of medical abstracts.
Abstract
To determine whether automatic document classification can be useful in systematic reviews on medical topics, and specifically whether classification performance can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers.

The test collection is the data used in a large-scale systematic review on the topic of dissemination strategies of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers as included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 forming a separate test set. As the machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, in which a single classifier is trained on instances of abstracts and their classifications (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers per abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall on the class of interest), but it is also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce the human workload.

For the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2% and the highest precision was 63%.
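The abstract names complement naïve Bayes as the learning algorithm. The snippet below is a minimal from-scratch sketch of that technique for two-class abstract screening: each class is scored against word counts from the *complement* of that class (all documents not in it), and the document is assigned to the class whose complement fits it worst. The toy abstracts, labels, and function names are illustrative assumptions, not the authors' data or code.

```python
# Sketch of complement naive Bayes for include/exclude abstract screening.
# Pure Python; toy data. Illustrative only, not the paper's implementation.
from collections import Counter
import math

def train_cnb(docs, labels, alpha=1.0):
    """Collect, for each class, word counts over the OTHER classes' docs."""
    vocab = set(w for d in docs for w in d.split())
    comp_counts = {}
    for c in set(labels):
        cnt = Counter()
        for d, y in zip(docs, labels):
            if y != c:
                cnt.update(d.split())
        comp_counts[c] = cnt
    return comp_counts, vocab, alpha

def predict_cnb(model, doc):
    """Assign the class whose complement model gives the LOWEST likelihood."""
    comp_counts, vocab, alpha = model
    scores = {}
    for c, cnt in comp_counts.items():
        total = sum(cnt.values())
        s = 0.0
        for w in doc.split():
            # Laplace-smoothed log-probability under the complement of class c.
            s += math.log((cnt[w] + alpha) / (total + alpha * len(vocab)))
        scores[c] = s
    return min(scores, key=scores.get)

docs = [
    "care services for elderly patients",     # included (1)
    "home care strategies for older adults",  # included (1)
    "gpu kernel scheduling",                  # excluded (0)
    "compiler register allocation",           # excluded (0)
]
labels = [1, 1, 0, 0]
model = train_cnb(docs, labels)
print(predict_cnb(model, "care for elderly"))  # -> 1 (included)
```

Complement counts are the key design choice: with heavily imbalanced screening data (far more excluded than included abstracts), estimating each class from its complement gives more evenly sized count pools than standard naïve Bayes.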
The human-machine workflow proposed in this paper achieved a recall of 99.6% and a precision of 17.8%.

The per-question method, which combines classifiers following the specific protocol of the review, leads to better results than the global method in terms of recall. Because neither method is reliable enough to classify abstracts by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review.
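The paper evaluates four ways of combining the per-question classifiers but the abstract does not say which. The sketch below shows one plausible, recall-favoring combination as an assumption: keep an abstract for human screening if any question-specific classifier votes "include". The protocol questions shown are hypothetical.

```python
# Illustrative OR-style combination of per-question classifier votes.
# This is an assumed rule, not necessarily one of the four combinations
# evaluated in the paper; the question wording is hypothetical.

def combine_per_question(votes):
    """votes: dict mapping protocol question -> bool ("include" vote).

    Returns True (send to human review) if ANY classifier votes include,
    which trades precision for the high recall screening requires."""
    return any(votes.values())

votes = {
    "Is the population elderly?": True,
    "Is the intervention a dissemination strategy?": False,
    "Is a relevant outcome reported?": False,
}
print(combine_per_question(votes))  # -> True: kept for human screening
```

An OR rule excludes an abstract only when every question-level classifier rejects it, which matches the stated priority of not missing relevant documents even at the cost of a larger human workload.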
Year
2011
DOI
10.1016/j.artmed.2010.10.005
Venue
Artificial Intelligence in Medicine
Keywords
automatic classification, human reviewer, medical concepts, ensemble of classifiers, systematic review protocol, text representation, systematic review, global method, highest recall, human expert, human workload, medical abstract, specific protocol, systematic reviews for the medical domain, automatic text classification, per-question method, highest precision
Field
Data mining, Systematic review, Naive Bayes classifier, Computer science, Workload, Precision and recall, Artificial intelligence, Classifier (linguistics), Recall, Workflow, Machine learning, Test set
DocType
Journal
Volume
51
Issue
1
ISSN
1873-2860
Citations
7
PageRank
0.57
References
6
Authors
5
Name | Order | Citations | PageRank
Oana Frunza | 1 | 75 | 7.02
Diana Inkpen | 2 | 1059 | 87.92
Stan Matwin | 3 | 3025 | 344.20
William Klement | 4 | 21 | 2.90
Peter O'Blenis | 5 | 44 | 3.47