Title
Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems.
Abstract
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom it is interpretable. We describe a model intended to help answer this question by identifying different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent's role influences its goals, and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.
Year: 2018
Venue: arXiv: Artificial Intelligence
Field: Interpretability, Audit, Ask price, Computer science, Artificial intelligence, Machine learning
DocType: Journal
Volume: abs/1806.07552
Citations: 1
PageRank: 0.37
References: 13
Authors: 5
Name                    Order  Citations  PageRank
Richard J. Tomsett      1      23         4.85
Dave Braines            2      61         11.18
Dan Harborne            3      1          0.71
Alun D. Preece          4      974        112.50
Supriyo Chakraborty     5      323        26.02