Title
Passive Bimanual Skills Learning From Demonstration With Motion Graph Attention Networks
Abstract
Enabling household robots to passively learn task-level skills from human demonstration could substantially boost their application in daily life. In this letter, we propose a Learning from Demonstration (LfD) scheme that captures human unimanual and bimanual demonstrations with a motion capture suit and virtual reality (VR) trackers, and transfers the demonstrated skills to a humanoid through a learnable graph attention network (GAT) based model. The model, trained on human hand trajectories and target object poses, yields a movement policy that acts as a trajectory generator: given the robot end-effector poses and, optionally, the object's initial pose, it outputs the Cartesian trajectories the end-effectors follow to execute the task. Tests on synthetic data and three real-robot experiments indicate that the policy can learn unimanual and coordinated bimanual, interactive and non-interactive manipulation skills within a unified scheme.
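The abstract describes a policy that mixes information across graph nodes (hands and object) with graph attention. As a minimal illustrative sketch only, not the authors' implementation, the following pure-Python snippet shows one single-head graph-attention aggregation step over a toy 3-node graph (left hand, right hand, object); all names, dimensions, and values are assumptions for illustration.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gat_aggregate(features, W, a):
    """One illustrative attention head: project each node feature with W,
    score every node pair with the shared attention vector a (LeakyReLU on
    the logits), softmax the scores per node, and return the
    attention-weighted sum of projected neighbors for every node."""
    projected = [[dot(row, f) for row in W] for f in features]
    out = []
    for hi in projected:
        # Attention logits e_ij = LeakyReLU(a . [h_i || h_j])
        logits = [dot(a, hi + hj) for hj in projected]
        logits = [x if x > 0 else 0.2 * x for x in logits]
        m = max(logits)  # subtract max for a numerically stable softmax
        weights = [math.exp(x - m) for x in logits]
        s = sum(weights)
        weights = [w / s for w in weights]
        out.append([sum(w * hj[k] for w, hj in zip(weights, projected))
                    for k in range(len(hi))])
    return out

# Toy 2-D "pose" features for left hand, right hand, and object (assumed).
features = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
W = [[1.0, 0.0], [0.0, 1.0]]   # identity projection for readability
a = [0.1, 0.1, 0.1, 0.1]       # shared attention vector over [h_i || h_j]
updated = gat_aggregate(features, W, a)
```

In a trajectory-generator setting such as the one the abstract sketches, layers like this would be stacked and followed by a head that regresses the next Cartesian waypoint for each end-effector node.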
Year
2022
DOI
10.1109/LRA.2022.3152974
Venue
IEEE ROBOTICS AND AUTOMATION LETTERS
Keywords
Datasets for human motion, deep learning in grasping and manipulation, dual arm manipulation, learning from demonstration
DocType
Journal
Volume
7
Issue
2
ISSN
2377-3766
Citations
0
PageRank
0.34
References
0
Authors
5
Name | Order | Citations | PageRank
Zhipeng Dong | 1 | 0 | 0.34
Zhihao Li | 2 | 0 | 0.34
Yunhui Yan | 3 | 31 | 10.02
Sylvain Calinon | 4 | 1897 | 117.63
Fei Chen | 5 | 30 | 14.06