Title
Learning One-Shot Imitation From Humans Without Humans
Abstract
Humans naturally learn to execute a new task after seeing it performed by others just once, and can then reproduce it in a variety of configurations. Endowing robots with the ability to imitate humans from a third-person perspective is an immediate and natural way of teaching new tasks. Only recently, through meta-learning, have there been successful attempts at one-shot imitation learning from humans; however, these approaches require extensive human effort to collect real-world training data for the robot. But is there a way to remove the need for real-world human demonstrations during training? We show that, with Task-Embedded Control Networks, we can achieve one-shot imitation learning by embedding human demonstrations into a task representation that conditions a control policy. Importantly, we do not use a real human arm to supply demonstrations during training, but instead leverage domain randomisation in an application that has not been seen before: sim-to-real transfer on humans. Evaluating our approach on pushing and placing tasks in both simulation and the real world, we show that we achieve results comparable to a system trained on real-world data while utilising only simulation data. Videos can be found here: https://sites.google.com/view/tecnets-humans.
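The paper itself includes no code; the following is a minimal, hypothetical PyTorch sketch of the conditioning mechanism the abstract describes: a demonstration is embedded into a normalised task vector, which then conditions a control policy on the current observation. All module names, layer sizes, and dimensions here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskEmbeddingNet(nn.Module):
    """Embeds one demonstration into a task vector (sizes are assumptions)."""

    def __init__(self, obs_dim=64, emb_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, demo_frames):
        # demo_frames: (T, obs_dim) -- e.g. per-timestep image features.
        # Average frame embeddings over time, then L2-normalise so task
        # vectors lie on a unit hypersphere.
        z = self.encoder(demo_frames).mean(dim=0)
        return F.normalize(z, dim=-1)


class ControlNet(nn.Module):
    """Policy conditioned on the task embedding; here the embedding is
    simply concatenated with the current observation (one common choice)."""

    def __init__(self, obs_dim=64, emb_dim=20, action_dim=7):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + emb_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, obs, task_emb):
        return self.policy(torch.cat([obs, task_emb], dim=-1))


# One-shot imitation at test time: embed a single (human) demonstration,
# then run the conditioned policy on live robot observations.
embed_net, ctrl_net = TaskEmbeddingNet(), ControlNet()
human_demo = torch.randn(50, 64)        # one demonstration, 50 timesteps
task_emb = embed_net(human_demo)        # single task vector
robot_obs = torch.randn(64)             # current robot observation
action = ctrl_net(robot_obs, task_emb)  # predicted action
```

In the sim-to-real setting described above, the embedding network would see only domain-randomised simulated human demonstrations during training, so that a real human demonstration at test time falls within the training distribution.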
Year
2020
DOI
10.1109/LRA.2020.2977835
Venue
IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Learning from demonstration, deep learning in robotics and automation, perception for grasping and manipulation
DocType
Journal
Volume
5
Issue
2
ISSN
2377-3766
Citations
1
PageRank
0.35
References
0
Authors
3
Name | Order | Citations | PageRank
Alessandro Bonardi | 1 | 1 | 0.35
Stephen James | 2 | 58 | 6.02
Andrew J. Davison | 3 | 6707 | 350.85