Title
Assessing and comparing interpretability techniques for artificial neural network breast cancer classification
Abstract
Breast cancer (BC) is the most common type of cancer among women. Thankfully, early detection and improvements in treatment have helped decrease the number of deaths. Data Mining techniques have long assisted BC tasks, whether screening, diagnosis, prognosis, treatment, monitoring, or management. Nowadays, the use of Data Mining is witnessing a new era: the main objective is no longer to replace humans but to enhance their capabilities, which is why Artificial Intelligence is now also referred to as Intelligence Augmentation. In this context, interpretability helps domain experts learn new patterns and machine learning experts debug their models. This paper investigates three black-box interpretation techniques, Feature Importance, Partial Dependence Plot, and LIME, applied to two types of feed-forward Artificial Neural Networks, the Multilayer Perceptron and the Radial Basis Function Network, trained on the Wisconsin Original dataset for breast cancer diagnosis. Results showed that local LIME explanations were instance-level interpretations that came in line with the global interpretations of the other two techniques. Global and local interpretability techniques can thus be combined to assess the trustworthiness of a black-box model.
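As a minimal sketch of one of the abstract's techniques, the following applies permutation-based Feature Importance to a small Multilayer Perceptron using scikit-learn. It uses scikit-learn's bundled breast cancer dataset (the Wisconsin Diagnostic variant; the paper uses the Wisconsin Original dataset, which differs) and a stand-in network architecture, neither of which is taken from the paper itself.

```python
# Hedged sketch: permutation Feature Importance for an MLP classifier.
# Dataset and network hyperparameters are illustrative assumptions,
# not the paper's actual experimental setup.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(
    data.data, data.target, random_state=0)

# A small MLP stands in for the black-box model under inspection.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))
model.fit(X_tr, y_tr)

# Permutation importance: the drop in test accuracy when each
# feature column is shuffled, averaged over n_repeats shuffles.
result = permutation_importance(
    model, X_te, y_te, n_repeats=10, random_state=0)

# Rank features from most to least important.
ranked = sorted(zip(result.importances_mean, data.feature_names),
                reverse=True)
for score, name in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

The same fitted `model` could then be passed to a Partial Dependence Plot or to LIME's tabular explainer to compare global and local views, as the paper does.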
Year
2021
DOI
10.1080/21681163.2021.1901784
Venue
COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION
Keywords
Interpretability, explainability, breast cancer, diagnosis, LIME, Partial Dependence Plot, features importance
DocType
Journal
Volume
9
Issue
6
ISSN
2168-1163
Citations
0
PageRank
0.34
References
0
Authors
3
Name             Order  Citations  PageRank
Hajar Hakkoum    1      0          0.34
Ali Idri         2      0          1.01
Ibtissam Abnane  3      0          0.34