Title |
---|
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems |
Abstract |
---|
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable. We describe a model intended to help answer this question, by identifying different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals, and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems. |
Year | Venue | Field |
---|---|---|
2018 | arXiv: Artificial Intelligence | Interpretability, Audit, Ask price, Computer science, Artificial intelligence, Machine learning |
DocType | Volume | Citations |
---|---|---|
Journal | abs/1806.07552 | 1 |

PageRank | References | Authors |
---|---|---|
0.37 | 13 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Richard J. Tomsett | 1 | 23 | 4.85 |
Dave Braines | 2 | 61 | 11.18 |
Dan Harborne | 3 | 1 | 0.71 |
Alun D. Preece | 4 | 974 | 112.50 |
Supriyo Chakraborty | 5 | 323 | 26.02 |