Title
Cross-context Visual Imitation Learning from Demonstrations.
Abstract
Imitation learning enables robots to learn a task by simply watching a demonstration of that task. Current imitation learning methods usually require the learner and the demonstrator to operate in the same context, which limits their scalability to practical applications. In this paper, we propose a more general imitation learning method that allows the learner and the demonstrator to come from different contexts, such as different viewpoints, backgrounds, and object positions and appearances. Specifically, we design a robotic system consisting of three models: a context translation model, a depth prediction model, and a multi-modal inverse dynamics model. First, the context translation model translates the demonstration from its source context into the learner's context. Then, taking the color observation and the depth observation as inputs, the inverse dynamics model maps the multi-modal observations into actions that reproduce the demonstration, where the depth observation is provided by the depth prediction model. By performing block stacking tasks both in simulation and in the real world, we demonstrate the cross-context learning advantage of the proposed system over existing systems.
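The abstract describes a pipeline of three models: a context translation model, a depth prediction model, and a multi-modal inverse dynamics model. The following is a minimal sketch of how such a pipeline could be composed, written in PyTorch. All module names, layer sizes, the action dimension, and the composition in imitation_step are illustrative assumptions, not the paper's actual architectures.

```python
# Illustrative sketch of the three-model pipeline from the abstract.
# Assumptions: network layouts, input sizes, and action_dim are placeholders.
import torch
import torch.nn as nn


class ContextTranslator(nn.Module):
    """Translates a demonstration frame (source context) into the learner's context,
    conditioned on the learner's current frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, demo_rgb, learner_rgb):
        return self.net(torch.cat([demo_rgb, learner_rgb], dim=1))


class DepthPredictor(nn.Module):
    """Predicts a single-channel depth map from an RGB observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb):
        return self.net(rgb)


class InverseDynamics(nn.Module):
    """Maps the multi-modal observation (RGB + predicted depth) and the translated
    goal frame to an action vector."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(7, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, rgb, depth, goal_rgb):
        x = torch.cat([rgb, depth, goal_rgb], dim=1)
        return self.head(self.encoder(x))


def imitation_step(translator, depth_model, inverse_model, demo_rgb, learner_rgb):
    """One control step: translate the demonstration into the learner's context,
    predict depth for the learner's view, then infer an action."""
    goal_rgb = translator(demo_rgb, learner_rgb)
    depth = depth_model(learner_rgb)
    return inverse_model(learner_rgb, depth, goal_rgb)


if __name__ == "__main__":
    demo = torch.rand(1, 3, 64, 64)     # demonstration frame from a different context
    learner = torch.rand(1, 3, 64, 64)  # learner's current camera frame
    action = imitation_step(ContextTranslator(), DepthPredictor(),
                            InverseDynamics(), demo, learner)
    print(action.shape)  # torch.Size([1, 4])
```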
Year
2020
DOI
10.1109/ICRA40945.2020.9196868
Venue
ICRA
DocType
Conference
Volume
2020
Issue
1
Citations
0
PageRank
0.34
References
33
Authors
5
Name        Order   Citations   PageRank
Shuo Yang   1       2           1.37
Zhang Wei   2       392         53.03
Weizhi Lu   3       20          3.33
Wang H      4       468         63.98
Yibin Li    5       226         59.56