Assessing plausibility of explanation and meta-explanation in inter-human conflicts

Department or Administrative Unit

Computer Science

This paper focuses on explanations in behavioral scenarios that involve conflicting agents. In these scenarios, implicit or explicit conflict can be caused by contradictory interests of the agents, as communicated in their explanations for why they behaved in a particular way, by a lack of knowledge of the situation, or by a mixture of explanations of multiple factors. We argue that in many cases, to assess the plausibility of explanations, we must analyze the following two components and their interrelations: (1) explanation at the actual object level (the explanation itself) and (2) explanation at the higher level (meta-explanation). A comparative analysis of the roles of both is conducted to assess the plausibility of how agents explain the scenarios of their interactions. Object-level explanation assesses the plausibility of individual claims by using a traditional approach to handle the argumentative structure of a dialog. Meta-explanation links the structure of a current scenario with that of previously learned scenarios of multi-agent interaction. The scenario structure includes agents' communicative actions and argumentation defeat relations between the subjects of these actions. We build a system where data for both object-level explanation and meta-explanation are visually specified, to assess the plausibility of how agent behavior in a scenario is explained. We verify that meta-explanation in the form of machine learning of scenario structure should be augmented by conventional explanation, that is, by finding arguments in the form of defeasibility analysis of individual claims, to increase the accuracy of plausibility assessment. We also define a ratio between object-level and meta-explanation as the relative accuracy of plausibility assessment based on the former and latter sources. We then observe that groups of scenarios can be clustered based on this ratio; hence, such a ratio is an important parameter of human behavior associated with explaining something to other humans.
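As a rough illustration of the ratio described above, the following sketch computes the relative accuracy of object-level versus meta-explanation assessment and uses it to weight the two plausibility scores. All function names and numeric values are hypothetical assumptions for illustration, not data or code from the paper:

```python
# Hypothetical sketch of the object-level vs. meta-explanation ratio.
# The accuracies and scores below are illustrative, not the paper's results.

def plausibility_ratio(object_level_accuracy: float, meta_accuracy: float) -> float:
    """Relative accuracy of object-level assessment versus meta-explanation."""
    return object_level_accuracy / meta_accuracy

def combined_assessment(object_score: float, meta_score: float, ratio: float) -> float:
    """Weight the two plausibility scores by the observed accuracy ratio."""
    weight = ratio / (1.0 + ratio)  # map the ratio into a [0, 1] weight
    return weight * object_score + (1.0 - weight) * meta_score

# Illustrative values: object-level assessment is more accurate for this group.
r = plausibility_ratio(0.80, 0.64)
score = combined_assessment(0.7, 0.5, r)
```

Scenarios with similar ratios could then be grouped by any standard clustering routine applied to this single parameter.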


This article was originally published in Engineering Applications of Artificial Intelligence. The full-text article from the publisher can be found here.

Due to copyright restrictions, this article is not available for free download from ScholarWorks @ CWU.


Engineering Applications of Artificial Intelligence


Copyright © 2011 Elsevier Ltd. All rights reserved.