Title: Towards a Grounded Dialog Model for Explainable Artificial Intelligence
Abstract: To build trust with their users, Explainable Artificial Intelligence (XAI) systems need an explanation model that can communicate internal decisions, behaviours, and actions to the interacting humans. Successful explanation involves both cognitive and social processes. In this paper we focus on the challenge of meaningful interaction between an explainer and an explainee, and investigate the structural aspects of an explanation in order to propose a human explanation dialog model. We follow a bottom-up approach, deriving the model by analysing transcripts of 398 explanation dialogs of different types. We use grounded theory to code the transcripts and identify the key components of an explanation dialog. Further analysis identifies the relationships between these components and the sequences and cycles that occur within a dialog. We present the generalized state model obtained from this analysis and compare it with an existing conceptual dialog model of explanation.
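The record does not include the paper's actual state model, but the abstract's idea of an explanation dialog with components, sequences, and cycles can be illustrated as a small finite-state machine. The state and transition names below are assumptions for illustration only, not the states identified in the paper:

```python
# Minimal sketch of an explanation dialog as a finite-state machine.
# State and event names are illustrative assumptions; the paper derives
# its actual model from grounded-theory coding of 398 dialog transcripts.

TRANSITIONS = {
    "start": {"ask_question": "question_posed"},
    "question_posed": {"give_explanation": "explanation_given"},
    "explanation_given": {
        "ask_followup": "question_posed",  # cycle: explainee asks a follow-up
        "acknowledge": "closed",           # explainee signals understanding
    },
    "closed": {},
}

def run_dialog(events, start="start"):
    """Replay a sequence of dialog events, returning the visited states."""
    state = start
    visited = [state]
    for event in events:
        allowed = TRANSITIONS[state]
        if event not in allowed:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        state = allowed[event]
        visited.append(state)
    return visited

# A dialog with one follow-up cycle before closing.
trace = run_dialog(["ask_question", "give_explanation",
                    "ask_followup", "give_explanation", "acknowledge"])
```

Replaying the event sequence above visits `question_posed` twice, showing how a follow-up question creates the kind of cycle the abstract describes.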
Year: 2018
Venue: arXiv: Artificial Intelligence
Field: Dialog box, Grounded theory, Computer science, State model, Artificial intelligence, Social processes, Cognition
DocType: Journal
Volume: abs/1806.08055
Citations: 0
PageRank: 0.34
References: 0
Authors: 4
Name             Order  Citations  PageRank
Prashan Madumal  1      6          2.09
Tim Miller       2      142        13.81
Frank Vetere     3      1805       143.63
Liz Sonenberg    4      802        119.89