Title
Explainable Robotic Systems
Abstract
The increasing complexity of robotic systems is pressing the need for them to be transparent and trustworthy. When people interact with a robotic system, they will inevitably construct mental models to understand and predict its actions. However, people's mental models of robotic systems stem from their interactions with living beings, which induces the risk of establishing incorrect or inadequate mental models of robotic systems and may lead people to either under- or over-trust these systems. We need to understand the inferences that people make about robots from their behavior, and leverage this understanding to formulate and implement behaviors into robotic systems that support the formation of correct mental models and foster trust calibration. This way, people will be better able to predict the intentions of these systems, and thus more accurately estimate their capabilities, better understand their actions, and potentially correct their errors. The aim of this full-day workshop is to provide a forum for researchers and practitioners to share and learn about recent research on people's inferences of robot actions, as well as the implementation of transparent, predictable, and explainable behaviors into robotic systems.
Year
2018
DOI
10.1145/3173386.3173568
Venue
HRI (Companion)
Keywords
Explainable robotics, behavior explanation, theory of mind, intentionality, transparency, trust calibration
DocType
Conference
ISSN
2167-2121
ISBN
978-1-4503-5615-2
Citations
1
PageRank
0.36
References
8
Authors
4
Name, Order, Citations, PageRank
Maartje M. A. de Graaf1102.57
Bertram F. Malle26113.44
Anca D. Dragan352948.64
Tom Ziemke468167.03