Title
Coping with the variability in humans reward during simulated human-robot interactions through the coordination of multiple learning strategies *
Abstract
An important current challenge in Human-Robot Interaction (HRI) is to enable robots to learn on-the-fly from human feedback. However, humans show great variability in the way they reward robots. We propose to address this issue by enabling the robot to combine different learning strategies, namely model-based (MB) and model-free (MF) reinforcement learning. We simulate two HRI scenarios: a simple task where the human congratulates the robot for putting the right cubes in the right boxes, and a more complicated version of this task where the cubes have to be placed in a specific order. We show that our existing MB-MF coordination algorithm, previously tested in robot navigation, works well here without retuning its parameters. It achieves maximal performance while incurring the same minimal computational cost as MF alone. Moreover, the algorithm yields robust performance regardless of the variability of the simulated human feedback, whereas each strategy alone is affected by this variability. Overall, the results suggest a promising way to promote robot learning flexibility when facing variable human feedback.
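The abstract describes coordinating a model-based and a model-free expert and selecting between them at decision time. As a minimal illustrative sketch (not the authors' exact criterion), one common arbitration scheme picks the expert whose action-probability distribution is most decisive (lowest entropy), penalized by its computational cost; the function names `choose_expert`, the trade-off weight `kappa`, and the cost inputs below are hypothetical.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of an action-probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def choose_expert(mb_probs, mf_probs, mb_cost, mf_cost, kappa=0.5):
    """Return "MB" or "MF": the expert with the lowest combined score of
    decision uncertainty (entropy) and computational cost.

    mb_probs / mf_probs: action-probability distributions proposed by each expert.
    mb_cost / mf_cost:   per-decision computational cost of each expert
                         (illustrative units; MB planning is typically costlier).
    kappa:               hypothetical weight trading uncertainty against cost.
    """
    score_mb = entropy(mb_probs) + kappa * mb_cost
    score_mf = entropy(mf_probs) + kappa * mf_cost
    return "MB" if score_mb < score_mf else "MF"

# When the MB expert is confident and the MF expert is still near-uniform,
# the arbitration selects MB despite its higher cost:
print(choose_expert([0.9, 0.05, 0.05], [0.34, 0.33, 0.33],
                    mb_cost=1.0, mf_cost=0.1))
```

Under such a scheme the cheap MF expert naturally takes over once its policy becomes as decisive as the MB one, which matches the abstract's claim of maximal performance at MF-level computational cost.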
Year: 2020
DOI: 10.1109/RO-MAN47096.2020.9223451
Venue: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
DocType: Conference
ISSN: 1944-9445
ISBN: 978-1-7281-6075-7
Citations: 0
PageRank: 0.34
References: 0
Authors: 5

Name             Order  Citations  PageRank
Dromnelle Rémi   1      0          0.34
Girard Benoît    2      185        22.20
Renaudo Erwan    3      8          2.28
Chatila Raja     4      0          0.34
Khamassi Mehdi   5      112        16.51