Title: Explaining Explanations in AI
Abstract: Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system and, most importantly, how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do-it-yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Year: 2019
DOI: 10.1145/3287560.3287574
Venue: FAT* '19: Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency
Keywords: Interpretability, Explanations, Accountability, Philosophy of Science
DocType: Conference
Volume: abs/1811.01439
Citations: 13
PageRank: 0.79
References: 20
Authors: 3

Name               Order   Citations   PageRank
Brent Mittelstadt  1       69          5.38
Chris Russell      2       1132        50.95
Sandra Wachter     3       18          2.26