Title
The Impact of POMDP-Generated Explanations on Trust and Performance in Human-Robot Teams.
Abstract
Researchers have observed that people will more accurately trust an autonomous system, such as a robot, if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain effective team performance even when the system is less than 100% reliable. However, current explanation algorithms are not sufficient for making a robot's quantitative reasoning (in terms of both uncertainty and conflicting goals) transparent to human teammates. In this work, we develop a novel mechanism for robots to automatically generate explanations of reasoning based on Partially Observable Markov Decision Problems (POMDPs). Within this mechanism, we implement alternate natural-language templates and then measure their differential impact on trust and team performance within an agent-based online test-bed that simulates a human-robot team task. The results demonstrate that the added explanation capability leads to improvement in transparency, trust, and team performance. Furthermore, by observing the different outcomes due to variations in the robot's explanation content, we gain valuable insight that can help lead to refinement of explanation algorithms to further improve human-robot interaction.
Year
2016
DOI
10.5555/2936924.2937071
Venue
AAMAS
Keywords
Human-robot interaction, POMDPs, explainable AI, trust
Field
Transparency (graphic), Decision problem, Partially observable Markov decision process, Computer science, Markov chain, Autonomous system (mathematics), Artificial intelligence, Robot, Human–robot interaction, Machine learning, Qualitative reasoning
DocType
Conference
ISBN
978-1-4503-4239-1
Citations
9
PageRank
0.68
References
19
Authors
3
Name | Order | Citations | PageRank
Ning Wang | 1 | 34 | 5.05
David V. Pynadath | 2 | 1556 | 130.56
Susan G. Hill | 3 | 38 | 5.73