Title
Improving Human-Robot Interaction Through Explainable Reinforcement Learning.
Abstract
Gathering the most informative data from humans without overloading them remains an active research area in AI, and is closely coupled with the problem of determining how and when information should be communicated to others [12]. Current decision support systems (DSS) are still overly simple and static, and cannot adapt to the changing environments in which we expect to deploy modern systems [3], [4], [9], [11]. They are intrinsically limited in their ability to explain their rationale rather than merely listing their future behaviors, which restricts a human's understanding of the system [2], [7]. Most probabilistic assessments of a task are conveyed after the task or skill is attempted rather than before [10], [14], [16], limiting failure recovery and danger avoidance mechanisms. Existing work on predicting failures relies on sensors to accurately detect explicitly annotated and learned failure modes [13]. As a result, important but non-obvious pieces of information for assessing appropriate trust and/or evaluating a course of action (COA) in collaborative scenarios can go overlooked, while irrelevant information may instead be provided, increasing clutter and mental workload. Understanding how AI models arrive at specific decisions is a key principle of trust [8]. It is therefore critically important to develop new strategies for anticipating, communicating, and explaining the justifications and rationale for AI-driven behaviors via contextually appropriate semantics.
Year
2019
DOI
10.1109/HRI.2019.8673198
Venue
HRI
Field
Task analysis, Computer science, Workload, Decision support system, Human–computer interaction, Probabilistic logic, Maintenance engineering, Semantics, Human–robot interaction, Reinforcement learning
DocType
Conference
ISSN
2167-2121
ISBN
978-1-5386-8555-6
Citations
0
PageRank
0.34
References
0
Authors
2
Name            Order  Citations  PageRank
Aaquib Tabrez   1      1          1.71
Bradley Hayes   2      1          1.04