Abstract
---
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by ‘explainable’ and ‘interpretable’. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly compatible, they are not identical, which gives rise to different intents and requirements for explainability/interpretability. We use the software engineering distinction between validation and verification, and the epistemological distinctions between knowns/unknowns, to tease apart the concerns of the stakeholder communities and highlight the areas where their foci overlap or diverge. It is not the purpose of the authors of this paper to ‘take sides’ — we count ourselves as members, to varying degrees, of multiple communities — but rather to help disambiguate what stakeholders mean when they ask ‘Why?’ of an AI.
Year | Venue | Field
---|---|---
2018 | arXiv: Artificial Intelligence | Data science, Interpretability, Ask price, Stakeholder, Verification and validation, Computer science, Artificial intelligence, Machine learning

DocType | Volume | Citations
---|---|---
Journal | abs/1810.00184 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 5
Name | Order | Citations | PageRank
---|---|---|---
Alun D. Preece | 1 | 974 | 112.50 |
Dan Harborne | 2 | 1 | 0.71 |
Dave Braines | 3 | 61 | 11.18 |
Richard J. Tomsett | 4 | 23 | 4.85 |
Supriyo Chakraborty | 5 | 323 | 26.02 |