Title
Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.
Abstract
• We propose methodologies to enhance the interpretability of a machine learning system.
• The approach can yield two levels of interpretability (global and local), allowing us to assess how the system learned task-specific relations and its individual predictions.
• Validation on brain tumor segmentation and penumbra estimation in acute stroke.
• Based on the evaluated clinical scenarios, the proposed approach allows us to confirm that the machine learning system learns relations coherent with expert knowledge and annotation protocols.
Year: 2018
DOI: 10.1016/j.media.2017.12.009
Venue: Medical Image Analysis
Keywords: Interpretability, Machine learning, Representation learning
Field: Voxel, Interpretability, Restricted Boltzmann machine, Pattern recognition, Artificial intelligence, Black box, Random forest, Feature learning, Abstract machine, Mathematics, Machine learning, Computation
DocType: Journal
Volume: 44
ISSN: 1361-8415
Citations: 4
PageRank: 0.42
References: 38
Authors: 7
Name               Order  Citations  PageRank
S Pereira          1      224        9.88
Raphael Meier      2      307        14.51
Richard McKinley   3      7          3.60
Roland Wiest       4      344        22.73
Victor Alves       5      233        23.65
Carlos A. Silva    6      224        10.89
Mauricio Reyes     7      73         13.74