Title
Who needs to know what, when?: Broadening the Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle
Abstract
The interpretability or explainability of AI systems (XAI) has been a topic gaining renewed attention in recent years across AI and HCI communities. Recent work has drawn attention to the emergent explainability requirements of in situ, applied projects, yet further exploratory work is needed to more fully understand this space. This paper investigates applied AI projects and reports on a qualitative interview study of individuals working on AI projects at a large technology and consulting company. Presenting an empirical understanding of the range of stakeholders in industrial AI projects, this paper also draws out the emergent explainability practices that arise as these projects unfold, highlighting the range of explanation audiences (who), as well as how their explainability needs evolve across the AI project lifecycle (when). We discuss the importance of adopting a sociotechnical lens in designing AI systems, noting how the "AI lifecycle" can serve as a design metaphor to further the XAI design field.
Year: 2021
DOI: 10.1145/3461778.3462131
Venue: Proceedings of the 2021 ACM Designing Interactive Systems Conference (DIS 2021)
Keywords: Explainable AI, Interviews, Work Practices
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name              | Order | Citations | PageRank
Shipi Dhanorkar   | 1     | 0         | 0.68
Christine T. Wolf | 2     | 0         | 4.73
Kun Qian          | 3     | 0         | 1.01
Anbang Xu         | 4     | 351       | 30.52
Ling-ling Yan     | 5     | 1273      | 70.78
Yunyao Li         | 6     | 530       | 37.81