Title
Trust Calibration within a Human-Robot Team: Comparing Automatically Generated Explanations
Abstract
Trust is a critical factor for achieving the full potential of human-robot teams. Researchers have theorized that people will trust an autonomous system, such as a robot, more accurately if they have a more accurate understanding of its decision-making process. Studies have shown that hand-crafted explanations can help maintain trust when the system is less than 100% reliable. In this work, we leverage existing agent algorithms to provide a domain-independent mechanism for robots to generate such explanations automatically. To measure the explanation mechanism's impact on trust, we collected self-reported survey data and behavioral data in an agent-based online testbed that simulates a human-robot team task. The results demonstrate that the added explanation capability improved transparency, trust, and team performance. Furthermore, by observing how outcomes varied with the content of the robot's explanations, we gain insight that can guide the refinement of explanation algorithms to further improve human-robot trust calibration.
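The record's Field list below mentions partially observable Markov decision processes, the kind of decision model the abstract alludes to. A minimal sketch of how a robot might render an explanation from its own decision model, assuming it exposes its chosen action, its belief over world states, and the action's expected utility; all names here (Decision, explain_decision) are illustrative and not taken from the paper.

```python
# Hypothetical sketch of belief-based explanation generation.
# Assumes the robot's POMDP reasoner exposes its chosen action,
# its belief over world states, and the action's expected utility;
# the names below are illustrative, not from the paper.

from dataclasses import dataclass


@dataclass
class Decision:
    action: str              # action the robot chose
    beliefs: dict            # belief over world states, e.g. {"dangerous": 0.7}
    expected_utility: float  # expected utility of the chosen action


def explain_decision(d: Decision) -> str:
    """Fill a natural-language template from the agent's own decision model."""
    # Report the state the robot considers most likely, with its confidence.
    state, prob = max(d.beliefs.items(), key=lambda kv: kv[1])
    return (f"I chose to {d.action}. "
            f"I believe the area is {state} (confidence {prob:.0%}). "
            f"The expected benefit of this action is {d.expected_utility:.2f}.")


if __name__ == "__main__":
    d = Decision(action="recommend protective gear",
                 beliefs={"dangerous": 0.7, "safe": 0.3},
                 expected_utility=0.85)
    print(explain_decision(d))
```

Domain independence comes from the template drawing only on generic model quantities (beliefs, actions, utilities) rather than task-specific wording.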
Year
2016
DOI
10.1109/HRI.2016.7451741
Venue
HRI
Keywords
human-robot team, comparing automatically generated explanations, autonomous system, decision-making process, hand-crafted explanations, domain-independent mechanism, behavioral data, robot explanation content, human-robot trust calibration
Field
Survey data collection, Transparency, Leverage, Partially observable Markov decision process, Computer science, Simulation, Testbed, Human–computer interaction, Autonomous system, Robot, Human–robot interaction
DocType
Conference
ISSN
2167-2121
ISBN
978-1-4673-8370-7
Citations
15
PageRank
0.73
References
25
Authors
3
Name                Order  Citations  PageRank
Ning Wang           1      34         5.05
David V. Pynadath   2      1556       130.56
Susan G. Hill       3      38         5.73