Title
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
Abstract
Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.
Year
2020
DOI
10.1016/j.patter.2020.100049
Venue
Patterns
Keywords
DSML 1: Concept: Basic principles of a new data science output observed and reported
DocType
Journal
Volume
1
Issue
4
ISSN
2666-3899
Citations
2
PageRank
0.64
References
0
Authors
8
Name                  Order  Citations  PageRank
Richard J. Tomsett    1      23         4.85
Alun D. Preece        2      974        112.50
Dave Braines          3      61         11.18
Federico Cerutti      4      233        31.66
Supriyo Chakraborty   5      323        26.02
Mani Srivastava       6      2          0.64
Gavin Pearson         7      2          0.64
Lance M. Kaplan       8      769        81.55