Abstract
---
We consider the problem of visual imitation learning without human supervision (e.g. kinesthetic teaching or teleoperation) and without access to an interactive reinforcement learning (RL) training environment. We present a geometric perspective to derive solutions to this problem. Specifically, we propose VGS-IL (Visual Geometric Skill Imitation Learning), an end-to-end geometry-parameterized task concept inference method that infers globally consistent geometric feature association rules from human demonstration video frames. We show that, instead of learning actions from image pixels, learning a geometry-parameterized task concept provides an explainable representation that is invariant from demonstrator to imitator across varied environmental settings. Moreover, such a task concept representation links directly to geometric-vision-based controllers (e.g. visual servoing), allowing efficient mapping of high-level task concepts to low-level robot actions.
Year | DOI | Venue
---|---|---
2020 | 10.1109/IROS45743.2020.9341758 | IROS

DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
21 | 4
Name | Order | Citations | PageRank
---|---|---|---
Jin Jun | 1 | 2 | 4.82 |
Laura Petrich | 2 | 8 | 2.83 |
Dehghan Masood | 3 | 0 | 1.01 |
Martin Jägersand | 4 | 334 | 43.10 |