Title
Assessing plausibility of explanation and meta-explanation in inter-human conflicts
Abstract
This paper focuses on explanations in behavioral scenarios that involve conflicting agents. In these scenarios, implicit or explicit conflict can be caused by contradictory interests of agents, as communicated in their explanations for why they behaved in a particular way, by a lack of knowledge of the situation, or by a mixture of both factors. We argue that in many cases, to assess the plausibility of explanations, we must analyze the following two components and their interrelations: (1) explanation at the actual object level (the explanation itself) and (2) explanation at the higher level (meta-explanation). A comparative analysis of the roles of both is conducted to assess the plausibility of how agents explain the scenarios of their interactions. Object-level explanation assesses the plausibility of individual claims by using a traditional approach to handling the argumentative structure of a dialog. Meta-explanation links the structure of a current scenario with that of previously learned scenarios of multi-agent interaction. The scenario structure includes agents' communicative actions and argumentation defeat relations between the subjects of these actions. We build a system in which data for both object-level explanation and meta-explanation are visually specified, to assess the plausibility of how agent behavior in a scenario is explained. We verify that meta-explanation, in the form of machine learning of scenario structure, should be augmented by conventional explanation, in the form of finding arguments via defeasibility analysis of individual claims, to increase the accuracy of plausibility assessment. We also define a ratio between object-level explanation and meta-explanation as the relative accuracy of plausibility assessment based on the former and the latter sources. We then observe that groups of scenarios can be clustered based on this ratio; hence, such a ratio is an important parameter of human behavior associated with explaining something to other humans.
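The abstract's ratio between object-level and meta-explanation can be illustrated with a minimal sketch. All function names, scenario labels, and accuracy values below are hypothetical, invented for illustration; the sketch only shows the idea of computing the relative accuracy of the two explanation sources per scenario and grouping scenarios by which source dominates.

```python
def plausibility_ratio(object_level_accuracy: float, meta_accuracy: float) -> float:
    """Relative accuracy of object-level explanation vs. meta-explanation."""
    return object_level_accuracy / meta_accuracy

# Illustrative per-scenario accuracies (fraction of claims whose
# plausibility was assessed correctly); purely made-up data.
scenarios = {
    "scenario_a": (0.80, 0.64),
    "scenario_b": (0.55, 0.78),
    "scenario_c": (0.76, 0.60),
}

# Group scenarios by whether the ratio exceeds 1 (object-level
# explanation dominates) or not (meta-explanation dominates).
groups = {"object_dominant": [], "meta_dominant": []}
for name, (obj_acc, meta_acc) in scenarios.items():
    r = plausibility_ratio(obj_acc, meta_acc)
    key = "object_dominant" if r > 1.0 else "meta_dominant"
    groups[key].append(name)
```

Clustering scenarios by this ratio, as the abstract suggests, would then operate on the per-scenario `r` values rather than on a fixed threshold.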
Year: 2011
DOI: 10.1016/j.engappai.2011.02.006
Venue: Eng. Appl. of AI
Keywords: human behavior, machine learning, comparative analysis
Field: Dialog box, Argumentative, Computer science, Argumentation theory, Defeasible reasoning, Agent behavior, Artificial intelligence, Machine learning
DocType: Journal
Volume: 24
Issue: 8
ISSN: 0952-1976
Citations: 3
PageRank: 0.42
References: 27
Authors: 3
Name | Order | Citations | PageRank
Boris Galitsky | 1 | 248 | 37.81
Boris Kovalerchuk | 2 | 235 | 50.77
Josep Lluís De La Rosa | 3 | 260 | 41.38