Title
How do visual explanations foster end users' appropriate trust in machine learning?
Abstract
We investigated the effects of example-based explanations for a machine learning classifier on end users' appropriate trust. We explored the effects of spatial layout and visual representation in an in-person user study with 33 participants. We measured participants' appropriate trust in the classifier, quantified the effects of different spatial layouts and visual representations, and observed changes in users' trust over time. The results show that each explanation improved users' trust in the classifier, and that the combination of explanation, human, and classification algorithm yielded much better decisions than either the human or the classification algorithm alone. Yet these visual explanations led to different levels of trust and can cause inappropriate trust when an explanation is difficult to understand. Visual representation and performance feedback strongly affected users' trust, while spatial layout showed a moderate effect. Our results do not support the hypothesis that individual differences (e.g., propensity to trust) affect users' trust in the classifier. This work advances the state of the art in trustable machine learning and informs the design and appropriate use of automated systems.
Year
2020
DOI
10.1145/3377325.3377480
Venue
IUI
DocType
Conference
Citations
1
PageRank
0.35
References
0
Authors
4
Name              Order  Citations  PageRank
Fumeng Yang       1      65         4.93
Zhuanyi Huang     2      2          1.37
Jean Scholtz      3      23         3.82
Dustin L. Arendt  4      2          0.69