Title |
---|
DisCERN: Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods |
Abstract |
---|
Counterfactual explanations focus on "actionable knowledge": helping end-users understand how a machine learning outcome could be changed to a more desirable one. For this purpose, a counterfactual explainer needs to discover the input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action an output change in the decision is an interesting challenge for counterfactual explainers. The DisCERN algorithm introduced in this paper is a case-based counterfactual explainer: counterfactuals are formed by replacing feature values from a nearest unlike neighbour (NUN) until an actionable change is observed. We show how widely adopted feature relevance-based explainers (i.e. LIME and SHAP) can inform DisCERN to identify the minimum subset of "actionable features". We demonstrate DisCERN on five datasets in a comparative study with the widely used optimisation-based counterfactual approach DiCE. Our results show that DisCERN outperforms DiCE, minimising both the number of features changed and the amount of change needed to create good counterfactual explanations. |
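The NUN-substitution loop described in the abstract can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the function names, toy classifier, and data are assumptions, and in DisCERN the relevance ordering would come from LIME or SHAP feature weights rather than being supplied by hand.

```python
def discern_counterfactual(query, nun, relevance_order, predict, target):
    """Copy feature values from the nearest unlike neighbour (NUN) into the
    query, most relevant feature first, until the prediction flips to the
    target class. Returns the counterfactual, or None if no flip occurs."""
    cf = list(query)                 # work on a copy of the query instance
    for i in relevance_order:        # feature indices sorted by relevance
        cf[i] = nun[i]               # substitute the NUN's value
        if predict(cf) == target:    # actionable change observed?
            return cf
    return None

# Toy stand-in for a trained classifier: class 1 iff x0 + x1 > 1.
predict = lambda x: int(x[0] + x[1] > 1.0)

query = [0.2, 0.3, 0.9]   # predicted class 0
nun   = [0.9, 0.8, 0.1]   # nearest neighbour of the opposite class
cf = discern_counterfactual(query, nun, relevance_order=[0, 1, 2],
                            predict=predict, target=1)
```

With this toy model, substituting only the single most relevant feature already flips the prediction, illustrating how a relevance-guided ordering keeps the number of changed features small.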
Year | DOI | Venue |
---|---|---|
2021 | 10.1109/ICTAI52525.2021.00233 | 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI 2021) |
Keywords | DocType | ISSN |
---|---|---|
Explainable AI, Counterfactuals, Case-based Reasoning | Conference | 1082-3409 |
Citations | PageRank | References |
---|---|---|
0 | 0.34 | 0 |
Authors |
---|
6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Nirmalie Wiratunga | 1 | 472 | 45.76 |
Anjana Wijekoon | 2 | 0 | 5.07 |
Ikechukwu Nkisi-Orji | 3 | 0 | 1.69 |
Kyle Martin | 4 | 0 | 1.01 |
Chamath Palihawadana | 5 | 0 | 0.34 |
David Corsar | 6 | 0 | 1.35 |