Title
Designing Theory-Driven User-Centric Explainable AI
Abstract
From healthcare to criminal justice, artificial intelligence (AI) is increasingly supporting high-consequence human decisions. This has spurred the field of explainable AI (XAI). This paper seeks to strengthen empirical, application-specific investigations of XAI by exploring the theoretical underpinnings of human decision making, drawing from the fields of philosophy and psychology. We propose a conceptual framework for building human-centered, decision-theory-driven XAI based on an extensive review across these fields. Drawing on this framework, we identify pathways along which human cognitive patterns drive the need for XAI and along which XAI can mitigate common cognitive biases. We then put the framework into practice by designing and implementing an explainable clinical diagnostic tool for intensive care phenotyping and by conducting a co-design exercise with clinicians. From this, we draw insights into how the framework bridges algorithm-generated explanations and human decision-making theories. Finally, we discuss implications for XAI design and development.
Year
2019
DOI
10.1145/3290605.3300831
Venue
CHI
Keywords
clinical decision making, decision making, explainable artificial intelligence, explanations, intelligibility
Field
Health care, Cognitive bias, Data science, Computer science, Human–computer interaction, Philosophy of psychology, Intensive care, Conceptual framework, Criminal justice, User-centered design, Intelligibility (communication)
DocType
Conference
ISBN
978-1-4503-5970-2
Citations
12
PageRank
0.47
References
0
Authors
4
Name             Order  Citations  PageRank
Danding Wang     1      38         2.79
Qian Yang        2      61         5.81
Ashraf M. Abdul  3      40         3.82
Brian Y. Lim     4      327        23.95