Title
Assessment of commercial NLP engines for medication information extraction from dictated clinical notes.
Abstract
We assessed the current state of commercial natural language processing (NLP) engines for their ability to extract medication information from textual clinical documents. Two thousand de-identified discharge summaries and family practice notes were submitted to four commercial NLP engines with the request to extract all medication information. The four sets of returned results were combined to create a comparison standard, which was validated against a manual, physician-derived gold standard created from a subset of 100 reports. Once validated, the individual vendor results for medication names, strengths, route, and frequency were compared against this automated standard, with precision, recall, and F-measures calculated. Compared with the manual, physician-derived gold standard, the automated standard was successful at accurately capturing medication names (F-measure = 93.2%), but performed less well with strength (85.3%) and route (80.3%), and relatively poorly with dosing frequency (48.3%). Moderate variability was seen in the strengths of the four vendors. In an analysis comparing the two document types, the vendors performed better with the structured discharge summaries than with the clinic notes. Although automated extraction may serve as the foundation for a manual review process, it is not ready to automate medication lists without human intervention.
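The abstract reports precision, recall, and F-measure for each medication field. As an illustration only, and not the authors' actual scoring code, the following minimal Python sketch shows how such scores could be computed once vendor output and the comparison standard are reduced to comparable sets of (name, strength, route, frequency) tuples; the function name and example data are hypothetical.

# Hypothetical sketch: score one vendor's extractions against a comparison standard.
def precision_recall_f1(extracted, standard):
    extracted, standard = set(extracted), set(standard)
    true_positives = len(extracted & standard)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(standard) if standard else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Example usage with made-up medication tuples (not data from the study).
p, r, f = precision_recall_f1(
    extracted=[("lisinopril", "10 mg", "oral", "daily")],
    standard=[("lisinopril", "10 mg", "oral", "daily"),
              ("metformin", "500 mg", "oral", "twice daily")],
)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")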
Year
2009
DOI
10.1016/j.ijmedinf.2008.08.006
Venue
International Journal of Medical Informatics
Keywords
Natural language processing (NLP), Medication extraction, Text mining
Field
Data mining, Text mining, Information retrieval, Computer science, Vendor, Automation, Information extraction, Artificial intelligence, Natural language processing, Gold standard, Recall, Dosing Frequency
DocType
Journal
Volume
78
Issue
4
ISSN
1386-5056
Citations
19
PageRank
1.89
References
9
Authors
7