Assessing plausibility of explanation and meta-explanation in inter-human conflicts
This paper focuses on explanations in behavioral scenarios that involve conflicting agents. In these scenarios, implicit or explicit conflict can be caused by agents' contradictory interests, as communicated in their explanations for why they behaved in a particular way, by a lack of knowledge of the situation, or by a mixture of explanations of multiple factors. We argue that in many cases, to assess the plausibility of explanations, we must analyze the following two components and their interrelations: (1) explanation at the actual object level (the explanation itself) and (2) explanation at the higher level (meta-explanation). A comparative analysis of the roles of both is conducted to assess the plausibility of how agents explain the scenarios of their interactions. Object-level explanation assesses the plausibility of individual claims by using a traditional approach to handling the argumentative structure of a dialog. Meta-explanation links the structure of a current scenario with that of previously learned scenarios of multi-agent interaction. The scenario structure includes agents' communicative actions and the argumentation defeat relations between the subjects of these actions. We build a system in which data for both object-level explanation and meta-explanation are visually specified, to assess the plausibility of how agent behavior in a scenario is explained. We verify that meta-explanation, in the form of machine learning of scenario structure, should be augmented by conventional explanation via finding arguments, in the form of a defeasibility analysis of individual claims, to increase the accuracy of plausibility assessment. We also define the ratio between object-level explanation and meta-explanation as the relative accuracy of plausibility assessment based on the former and latter sources. We then observe that groups of scenarios can be clustered based on this ratio; hence, such a ratio is an important parameter of human behavior associated with explaining something to other humans.
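As a minimal sketch of the final idea in the abstract, the ratio between the two explanation sources can be read as the relative accuracy of plausibility assessment under each, with scenarios then grouped by that ratio. The function names, thresholds, and cluster labels below are illustrative assumptions, not the authors' actual system:

```python
def plausibility_ratio(object_level_accuracy: float, meta_accuracy: float) -> float:
    """Relative accuracy of object-level vs. meta-explanation assessment.

    Hypothetical reading of the ratio defined in the abstract: accuracy of
    plausibility assessment from object-level explanation divided by the
    accuracy from meta-explanation (structure-based) assessment.
    """
    if meta_accuracy == 0:
        raise ValueError("meta-explanation accuracy must be non-zero")
    return object_level_accuracy / meta_accuracy


def cluster_by_ratio(ratios):
    """Group scenarios into coarse clusters by their ratio.

    The thresholds (0.8, 1.2) and labels are purely illustrative; the paper
    only states that groups of scenarios can be clustered by this ratio.
    """
    clusters = {"object-dominant": [], "balanced": [], "meta-dominant": []}
    for r in ratios:
        if r > 1.2:
            clusters["object-dominant"].append(r)
        elif r < 0.8:
            clusters["meta-dominant"].append(r)
        else:
            clusters["balanced"].append(r)
    return clusters
```

For example, a scenario whose claims are assessed at 0.8 accuracy by object-level argumentation but 0.4 by meta-explanation yields a ratio of 2.0, placing it in the cluster where argument-level analysis dominates.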
Galitsky, B. A., Kovalerchuk, B., & de la Rosa, J. L. (2011). Assessing plausibility of explanation and meta-explanation in inter-human conflicts. Engineering Applications of Artificial Intelligence, 24(8), 1472–1486. https://doi.org/10.1016/j.engappai.2011.02.006
Engineering Applications of Artificial Intelligence
Copyright © 2011 Elsevier Ltd. All rights reserved.