Title
Manipulating and Measuring Model Interpretability
Abstract
With machine learning models being increasingly used to aid decision making even in high-stakes domains, there has been a growing interest in developing interpretable models. Although many supposedly interpretable models have been proposed, there have been relatively few experimental studies investigating whether these models achieve their intended effects, such as making people more closely follow a model’s predictions when it is beneficial for them to do so or enabling them to detect when a model has made a mistake. We present a sequence of pre-registered experiments (N = 3,800) in which we showed participants functionally identical models that varied only in two factors commonly thought to make machine learning models more or less interpretable: the number of features and the transparency of the model (i.e., whether the model internals are clear or black box). Predictably, participants who saw a clear model with few features could better simulate the model’s predictions. However, we did not find that participants more closely followed its predictions. Furthermore, showing participants a clear model meant that they were less able to detect and correct for the model’s sizable mistakes, seemingly due to information overload. These counterintuitive findings emphasize the importance of testing over intuition when developing interpretable models.
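Illustrative sketch (not the authors' experimental materials): the manipulation described in the abstract, functionally identical models that differ only in the number of displayed features and in whether the model internals are visible, can be made concrete as below. The feature names x1 through x8, the WEIGHTS values, and the predict and render helpers are all hypothetical; the single shared predict function is what keeps every condition's predictions identical.

import itertools
import random

# Assumed illustrative weights: only the first two features carry signal, so a
# two-feature display and an eight-feature display describe the same model.
WEIGHTS = {f"x{i}": w for i, w in enumerate([2.0, 1.0, 0, 0, 0, 0, 0, 0], start=1)}

def predict(example):
    """One shared model used in every condition, so predictions never differ."""
    return sum(WEIGHTS[name] * example.get(name, 0.0) for name in WEIGHTS)

def render(example, n_features, transparent):
    """Build what a participant in one condition sees."""
    shown = dict(itertools.islice(example.items(), n_features))
    internals = {k: WEIGHTS[k] for k in shown} if transparent else "black box"
    return {"features": shown, "internals": internals, "prediction": predict(example)}

# 2 x 2 between-subjects design: few/many displayed features x clear/black-box internals.
CONDITIONS = [(n, t) for n in (2, 8) for t in (True, False)]

example = {f"x{i}": random.uniform(0.0, 10.0) for i in range(1, 9)}
n_features, transparent = random.choice(CONDITIONS)  # random condition assignment
print(render(example, n_features, transparent))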
Year
2021
DOI
10.1145/3411764.3445315
Venue
CHI
Keywords
interpretability, machine-assisted decision making, human-centered machine learning
Field
Small number, Transparency (graphic), Interpretability, Data mining, End user, Randomized experiment, Computer science, Artificial intelligence, Empirical research, Machine learning
DocType
Conference
Volume
abs/1802.07810
Citations
10
PageRank
0.47
References
0
Authors
5
Name | Order | Citations | PageRank
Forough Poursabzi-Sangdeh | 1 | 10 | 0.47
Daniel G. Goldstein | 2 | 194 | 12.27
Jake Hofman | 3 | 1364 | 64.87
Jennifer Wortman Vaughan | 4 | 929 | 42.23
Hanna Wallach | 5 | 1445 | 77.79