Title
Stakeholders in Explainable AI
Abstract
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by ‘explainable’ and ‘interpretable’. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not the purpose of the authors of this paper to ‘take sides’ (we count ourselves as members, to varying degrees, of multiple communities) but rather to help disambiguate what stakeholders mean when they ask ‘Why?’ of an AI.
Year
2018
Venue
arXiv: Artificial Intelligence
Field
Data science, Interpretability, Stakeholder, Verification and validation, Computer science, Artificial intelligence, Machine learning
DocType
Journal
Volume
abs/1810.00184
Citations
0
PageRank
0.34
References
0
Authors
5
Name                 Order  Citations  PageRank
Alun D. Preece       1      974        112.50
Dan Harborne         2      1          0.71
Dave Braines         3      61         11.18
Richard J. Tomsett   4      23         4.85
Supriyo Chakraborty  5      323        26.02