Title
Sharpening Local Interpretable Model-Agnostic Explanations for Histopathology: Improved Understandability and Reliability
Abstract
Because they are accountable for the reports they sign, pathologists may be wary of high-quality deep learning outcomes if the decision-making is not understandable. Applying off-the-shelf methods with default configurations, such as Local Interpretable Model-Agnostic Explanations (LIME), is not sufficient to generate stable and understandable explanations. This work improves the application of LIME to histopathology images by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The resulting visualizations reveal that the deep classifier attends sharply and strongly to the neoplastic nuclei in the dataset, an observation in line with clinical decision making. Compared to standard LIME, our explanations are more understandable to domain experts, are more stable, and pass the sanity checks of consistency under data or initialization changes and of sensitivity to network parameters. This represents a promising step toward giving pathologists tools to obtain additional information about image classification models. The code and trained models are available on GitHub.
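The core idea, replacing LIME's default superpixel segmentation with segments derived from nuclei annotations, can be sketched with the `lime` Python package, whose `explain_instance` accepts a custom `segmentation_fn`. The sketch below is an illustrative assumption, not the authors' released code: the classifier `model_predict_proba`, the synthetic image, and the binary `nuclei_mask` are hypothetical placeholders standing in for a real tumor classifier and real annotations.

```python
import numpy as np
from lime import lime_image
from skimage.measure import label

def make_nuclei_segmentation_fn(nuclei_mask):
    # Each connected nucleus in the annotation mask becomes its own LIME
    # segment; all background pixels share segment 0, so attributions are
    # computed per nucleus rather than per generic superpixel.
    segments = label(nuclei_mask > 0)
    return lambda image: segments

# --- Hypothetical stand-ins so the sketch runs end to end ---------------
rng = np.random.default_rng(0)
image = rng.integers(0, 255, size=(128, 128, 3), dtype=np.uint8)  # fake patch
nuclei_mask = np.zeros((128, 128), dtype=np.uint8)                # fake annotation
nuclei_mask[20:40, 20:40] = 1
nuclei_mask[70:95, 60:90] = 1

def model_predict_proba(images):
    # Placeholder for the black-box tumor classifier:
    # (N, H, W, 3) batch -> (N, 2) class probabilities.
    scores = images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - scores, scores], axis=1)

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    image,
    classifier_fn=model_predict_proba,
    segmentation_fn=make_nuclei_segmentation_fn(nuclei_mask),
    top_labels=1,
    num_samples=500,
)
overlay, seg_mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```

Because each LIME segment corresponds to a single annotated nucleus, the resulting attribution map is expressed in the structures pathologists actually reason about, which is plausibly what underlies the improved understandability and stability reported in the abstract.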
Year
2021
DOI
10.1007/978-3-030-87199-4_51
Venue
MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT III
Keywords
Histopathology, Interpretable AI, Reliable AI
DocType
Conference
Volume
12903
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
6