Title |
---|
Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. |
Abstract |
---|
• We propose methodologies to enhance the interpretability of a machine learning system. • The approach can yield two levels of interpretability (global and local), allowing us to assess both how the system learned task-specific relations and how it makes its individual predictions. • Validation on brain tumor segmentation and penumbra estimation in acute stroke. • Based on the evaluated clinical scenarios, the proposed approach allows us to confirm that the machine learning system learns relations coherent with expert knowledge and annotation protocols. |
Year | DOI | Venue |
---|---|---|
2018 | 10.1016/j.media.2017.12.009 | Medical Image Analysis |
Keywords | Field | DocType |
---|---|---|
Interpretability, Machine learning, Representation learning | Voxel, Interpretability, Restricted Boltzmann machine, Pattern recognition, Artificial intelligence, Black box, Random forest, Feature learning, Abstract machine, Mathematics, Machine learning, Computation | Journal |

Volume | ISSN | Citations |
---|---|---|
44 | 1361-8415 | 4 |

PageRank | References | Authors |
---|---|---|
0.42 | 38 | 7 |
Name | Order | Citations | PageRank |
---|---|---|---|
S Pereira | 1 | 224 | 9.88 |
Raphael Meier | 2 | 307 | 14.51 |
Richard McKinley | 3 | 7 | 3.60 |
Roland Wiest | 4 | 344 | 22.73 |
Victor Alves | 5 | 233 | 23.65 |
Carlos A. Silva | 6 | 224 | 10.89 |
Mauricio Reyes | 7 | 73 | 13.74 |