Title
---
A system for learning continuous human-robot interactions from human-human demonstrations
Abstract
---
We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot's motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the capabilities for spatio-temporal adaptation of the demonstrated behavior to the current situation.
Year | DOI | Venue
---|---|---
2017 | 10.1109/ICRA.2017.7989334 | ICRA
Field | DocType | Volume
---|---|---
Robot learning, Motion capture, Interaction model, Control engineering, Human interaction, Human–computer interaction, Artificial intelligence, Human–robot interaction, Computer vision, Robot kinematics, Engineering, Robot, Hidden Markov model | Conference | 2017

Issue | Citations | PageRank
---|---|---
1 | 2 | 0.41

References | Authors
---|---
14 | 5
Name | Order | Citations | PageRank
---|---|---|---
David Vogt | 1 | 98 | 7.08
Simon Stepputtis | 2 | 4 | 1.78
Steve Grehl | 3 | 6 | 1.91
Bernhard Jung | 4 | 253 | 37.74
Heni Ben Amor | 5 | 359 | 35.77