Title
What Information is Required for Explainable AI? : A Provenance-based Research Agenda and Future Challenges
Abstract
Deriving explanations of an Artificial Intelligence (AI)-based system's decision making is becoming increasingly essential for meeting quality standards and for operating in a transparent, comprehensive, understandable, and explainable manner. Furthermore, security issues and concerns from human perspectives emerge when describing the explainability properties of AI. A full system view is required to enable humans to properly estimate the risks of dealing with such systems. This paper presents open issues in this research area to give an overall picture of explainability and of the information an explanation requires to make a decision-oriented AI system transparent to humans. It illustrates the potential contribution of proper provenance data to AI-based systems by describing a provenance graph-based design. The paper proposes a six-Ws framework to demonstrate how a security-aware, provenance graph-based design can form the basis for providing end-users with sufficient meta-information on AI-based decision systems. An example scenario then highlights the information required for better explainability from both human and security-aware perspectives. Finally, associated challenges are discussed to provoke further research and commentary.
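The record does not specify the six-Ws framework beyond the abstract. As a rough illustration of how security-aware provenance meta-information might be attached to each step of an AI pipeline and walked back from a decision to its sources, consider the following Python sketch. All class names, fields, and example values are hypothetical assumptions; the paper may define its six Ws differently.

# Hypothetical sketch of a six-Ws provenance record; the field names,
# class, and example values are illustrative assumptions, not the
# authors' implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProvenanceRecord:
    """One step in an AI pipeline, annotated with six-Ws meta-information."""
    who: str     # responsible agent (data curator, model, operator, ...)
    what: str    # activity or artefact produced at this step
    when: str    # timestamp of the step
    where: str   # system or location where the step occurred
    why: str     # purpose or policy justifying the step
    how: str     # method used, including any security controls applied
    inputs: List["ProvenanceRecord"] = field(default_factory=list)  # graph edges

def explain(record: ProvenanceRecord, depth: int = 0) -> None:
    """Walk the provenance graph, printing the six-Ws answers per step."""
    pad = "  " * depth
    print(f"{pad}{record.what}: who={record.who}, when={record.when}, "
          f"where={record.where}, why={record.why}, how={record.how}")
    for parent in record.inputs:
        explain(parent, depth + 1)

# Toy graph: a model decision traced back to the data it was derived from.
data = ProvenanceRecord("hospital-db", "patient vitals export",
                        "2020-06-01T09:00Z", "clinic server",
                        "model training", "TLS transfer, digitally signed")
decision = ProvenanceRecord("triage-model-v2", "risk score = high",
                            "2020-06-02T14:30Z", "cloud inference node",
                            "triage decision", "gradient-boosted model",
                            inputs=[data])
explain(decision)

Walking the graph from a decision back to its sources yields, for each step, the who/what/when/where/why/how answers that the abstract argues end-users need to assess an AI-based decision system.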
Year
2020
DOI
10.1109/CIC50333.2020.00030
Venue
2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)
Keywords
artificial intelligence, data provenance, explainable AI, decision-oriented systems, cybersecurity, human-centric policy
DocType
Conference
ISBN
978-1-7281-8542-2
Citations
0
PageRank
0.34
References
0
Authors
5
Name                     Order  Citations  PageRank
Fariha Tasmin Jaigirdar  1      0          1.35
Carsten Rudolph          2      4          2.78
Gillian Oliver           3      3          1.04
David Watts              4      0          0.34
Chris Bain               5      0          0.34