Title
Trust in AutoML: exploring information needs for establishing trust in automated machine learning systems
Abstract
We explore trust in a relatively new area of data science: Automated Machine Learning (AutoML). In AutoML, AI methods are used to generate and optimize machine learning models by automatically engineering features, selecting models, and optimizing hyperparameters. In this paper, we seek to understand what kinds of information influence data scientists' trust in the models produced by AutoML. We operationalize trust as a willingness to deploy a model produced using automated methods. We report results from three studies - qualitative interviews, a controlled experiment, and a card-sorting task - to understand the information needs of data scientists for establishing trust in AutoML systems. We find that including transparency features in an AutoML tool increased users' trust in and understanding of the tool, and that, of all the proposed features, model performance metrics and visualizations are the most important information for data scientists when establishing trust in an AutoML tool.
Year
2020
DOI
10.1145/3377325.3377501
Venue
IUI
DocType
Conference
Citations
1
PageRank
0.36
References
25
Authors
9
Name               Order  Citations  PageRank
Jaimie Drozdal     1      3          2.87
Justin D. Weisz    2      111        19.46
Dakuo Wang         3      73         14.74
Gaurav Dass        4      1          0.36
Bingsheng Yao      5      1          1.03
Changruo Zhao      6      1          0.36
Michael J. Muller  7      2310       303.58
Lin Ju             8      1          0.36
Hui Su             9      293        33.30