Abstract
---

We describe a mobile manipulation hardware and software system capable of autonomously performing complex human-level tasks in real homes, after being taught the task with a single demonstration from a person in virtual reality. This is enabled by a highly capable mobile manipulation robot, whole-body task space hybrid position/force control, teaching of parameterized primitives linked to a robust learned dense visual embedding representation of the scene, and a task graph of the taught behaviors. We demonstrate the robustness of the approach by presenting results for performing a variety of tasks, under different environmental conditions, in multiple real homes. Our approach achieves an 85% overall success rate on three tasks that consist of an average of 45 behaviors each.
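The abstract's key structural idea, a task graph of taught, parameterized primitives that are each keyed to a learned visual embedding of the scene, can be illustrated with a minimal sketch. Everything below (the names `Behavior`, `TaskGraph`, `anchor_embedding`, and the first-successor execution policy) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch only: a task graph of parameterized behaviors, each
# keyed to a stored visual embedding so it can be re-anchored in a new scene.
# All class and field names are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional, Sequence


@dataclass
class Behavior:
    """One taught parameterized primitive (e.g., a grasp or a wipe motion)."""
    name: str
    params: Dict[str, float]                        # parameters captured from the demonstration
    anchor_embedding: Optional[List[float]] = None  # stand-in for a dense visual descriptor


@dataclass
class TaskGraph:
    """Directed graph of taught behaviors; edges are allowed transitions."""
    behaviors: Dict[str, Behavior] = field(default_factory=dict)
    edges: Dict[str, List[str]] = field(default_factory=dict)

    def add(self, behavior: Behavior, successors: Sequence[str] = ()) -> None:
        self.behaviors[behavior.name] = behavior
        self.edges[behavior.name] = list(successors)

    def execute(self, start: str, run: Callable[[Behavior], bool]) -> bool:
        """Walk the graph from `start`, taking the first successor each step.

        `run` executes one behavior on the robot and reports success; a real
        system would branch to recovery behaviors rather than just stop.
        """
        current: Optional[str] = start
        while current is not None:
            if not run(self.behaviors[current]):
                return False
            successors = self.edges.get(current, [])
            current = successors[0] if successors else None
        return True


# Hypothetical usage: a two-step fragment of a longer taught task.
graph = TaskGraph()
graph.add(Behavior("open_drawer", {"force_limit_n": 20.0}), successors=["grasp_cup"])
graph.add(Behavior("grasp_cup", {"grasp_width_m": 0.07}))
graph.execute("open_drawer", run=lambda b: True)  # replace the lambda with a real executor
```

Under this reading, a taught task such as wiping a table is a chain of such behaviors, each re-anchored at run time by matching its stored embedding against the current scene.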
| Field | Value |
|---|---|
| Year | 2020 |
| DOI | 10.1109/ICRA40945.2020.9196677 |
| Venue | ICRA |
| DocType | Conference |
| Volume | 2020 |
| Issue | 1 |
| Citations | 0 |
| PageRank | 0.34 |
| References | 6 |
Authors (13)
---
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Max Bajracharya | 1 | 224 | 18.15 |
| James Borders | 2 | 24 | 1.76 |
| Dan Helmick | 3 | 0 | 0.34 |
| Thomas Kollar | 4 | 580 | 32.64 |
| Michael Laskey | 5 | 90 | 11.35 |
| John Leichty | 6 | 37 | 2.75 |
| Jeremy Ma | 7 | 181 | 9.93 |
| Umashankar Nagarajan | 8 | 0 | 0.34 |
| Akiyoshi Ochiai | 9 | 10 | 4.42 |
| Josh Petersen | 10 | 0 | 0.34 |
| Krishna Shankar | 11 | 26 | 3.56 |
| Kevin Stone | 12 | 0 | 1.69 |
| Yutaka Takaoka | 13 | 0 | 0.34 |