Title |
---|
Proactive, incremental learning of gesture-action associations for human-robot collaboration |
Abstract |
---|
Identifying an object of interest, grasping it, and handing it over are key capabilities of collaborative robots. In this context, we propose a fast, supervised learning framework for associating human hand gestures with the intended robotic manipulation actions. The framework enables the robot to learn these associations on the fly while performing a task with the user. We consider a domestic scenario of assembling a kid's table, in which the robot's role is to assist the user. To facilitate the collaboration, we incorporate the robot's gaze into the framework. The proposed approach is evaluated both in simulation and in a real environment. We study the effect of gesture-detection accuracy on the number of interactions required to complete the task. Moreover, our quantitative analysis shows that purposeful gaze can significantly reduce the time required to achieve the goal. |
Year | DOI | Venue |
---|---|---
2017 | 10.1109/ROMAN.2017.8172325 | 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) |
Keywords | Field | DocType
---|---|---
proactive learning, incremental learning, gesture-action associations, human-robot collaboration, collaborative robots, human hand gestures, domestic scenario, robot gaze, gesture detection, kid's table assembly, robotic manipulation actions, supervised learning framework | Computer vision, Gaze, Computer science, Gesture, Gesture recognition, Supervised learning, Human–computer interaction, Artificial intelligence, Probabilistic logic, Robot, Human–robot interaction, Cognitive neuroscience of visual object recognition | Conference
ISSN | ISBN | Citations
---|---|---
1944-9445 | 978-1-5386-3519-3 | 0
PageRank | References | Authors
---|---|---
0.34 | 16 | 3
Name | Order | Citations | PageRank
---|---|---|---
Dadhichi Shukla | 1 | 21 | 3.11 |
Özgür Erkent | 2 | 26 | 4.96 |
Justus H. Piater | 3 | 543 | 61.56 |