Title
Effect of Errors on the Evaluation of Machine Learning Systems
Abstract
Information such as accuracy and outcome explanations can be useful for the evaluation of machine learning systems, but it can also lead to over-trust: an evaluator may not suspect that a machine learning system could have errors and may overlook problems in its explanations. Research has shown that errors not only decrease trust but can also promote curiosity about the performance of a system. Presenting errors to evaluators may therefore be a way to induce suspicion in the context of evaluating a machine learning system. In this paper, we investigate this possibility in three experiments in which participants were asked to evaluate text classification systems. We presented two types of errors: incorrect predictions and errors in the explanations. The results show that patterns of errors in the explanations negatively influenced willingness to recommend a system, and that fewer participants chose the system with higher accuracy when its errors formed a pattern than when the errors were random. Moreover, more participants cited evidence from the explanations in the reasons for their evaluations, suggesting that they were able to detect the error patterns.
Year
2022
DOI
10.5220/0010839300003124
Venue
Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (HUCAPP), Vol. 2
Keywords
User Perception, Errors, Machine Learning Model Evaluation, User Study
DocType
Conference
ISSN
2184-4321
Citations
0
PageRank
0.34
References
0
Authors
3
Name                Order  Citations  PageRank
Vanessa Bracamonte  1      0          1.69
Seira Hidano        2      0          0.68
Shinsaku Kiyomoto   3      0          1.35