Title: How to Choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice
Abstract: Explainability is becoming an important requirement for organizations that make use of automated decision-making, due to regulatory initiatives and a shift in public awareness. Various and significantly different algorithmic methods to provide this explainability have been introduced in the field, but the existing literature in the machine learning community has paid little attention to the stakeholder, whose needs are studied rather in the human-computer interaction community. Organizations that want or need to provide this explainability are therefore confronted with selecting an appropriate method for their use case. In this paper, we argue that a methodology is needed to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders. In particular, our contributions include documents used to characterize XAI methods and user requirements (shown in the Appendix), upon which our methodology builds.
Year: 2021
DOI: 10.1007/978-3-030-93736-2_39
Venue: Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Part I
Keywords: Explainable artificial intelligence, Interpretable machine learning, Stakeholder needs, Methodology
DocType: Conference
Volume: 1524
ISSN: 1865-0929
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name                Order  Citations  PageRank
Tom Vermeire        1      0          0.34
Thibault Laugel     2      9          2.87
Xavier Renard       3      9          2.54
David Martens       4      0          0.34
Marcin Detyniecki   5      0          0.34