Title
Sequential Robot Imitation Learning From Observations
Abstract
This paper presents a framework for learning the sequential structure of demonstrations for robot imitation learning. We first present a family of task-parameterized hidden semi-Markov models that extracts invariant segments (also called sub-goals or options) from demonstrated trajectories, and optimally follows the sequence of states sampled from the model with a linear quadratic tracking controller. We then extend the concept to learning invariant segments from visual observations that are sequenced together for robot imitation. We present Motion2Vec, which learns a deep embedding space by minimizing a metric learning loss in a Siamese network: images from the same action segment are pulled together while being pushed away from randomly sampled images of other segments, and a time contrastive loss is used to preserve the temporal ordering of the images. The trained embeddings are segmented with a recurrent neural network and subsequently used to decode the end-effector pose of the robot. We first show its application to a pick-and-place task with the Baxter robot, avoiding a moving obstacle, from only four kinesthetic demonstrations, followed by imitation of a suturing task from publicly available suturing videos of the JIGSAWS dataset with state-of-the-art 85.5% segmentation accuracy and 0.94 cm position error per observation on the test set.
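The two components summarized in the abstract lend themselves to short sketches. First, a minimal illustration of finite-horizon linear quadratic tracking of a reference state sequence, such as one sampled from the hidden semi-Markov model. The linear dynamics, cost weights, and function names here are assumptions chosen for illustration, not the paper's formulation.

```python
# Sketch: finite-horizon LQ tracking of a reference sequence via a
# backward Riccati recursion. Hypothetical dynamics (A, B) and weights
# (Q, R); not the authors' implementation.
import numpy as np

def lqt_gains(A, B, Q, R, refs):
    """Compute time-varying gains minimizing
    sum_t (x_t - r_t)^T Q (x_t - r_t) + u_t^T R u_t."""
    T = len(refs)
    P = Q.copy()          # terminal quadratic value term
    q = -Q @ refs[-1]     # terminal linear value term
    Ks, ks = [None] * (T - 1), [None] * (T - 1)
    for t in range(T - 2, -1, -1):
        G = np.linalg.inv(R + B.T @ P @ B)
        K = G @ B.T @ P @ A          # feedback gain
        k = G @ B.T @ q              # feedforward term
        P = Q + A.T @ P @ (A - B @ K)
        q = -Q @ refs[t] + (A - B @ K).T @ q
        Ks[t], ks[t] = K, k
    return Ks, ks

def rollout(A, B, Ks, ks, x0):
    """Apply the tracking controller u_t = -K_t x_t - k_t."""
    x, traj = x0, [x0]
    for K, k in zip(Ks, ks):
        x = A @ x + B @ (-K @ x - k)
        traj.append(x)
    return np.array(traj)
```

Second, a sketch of the two embedding losses described for Motion2Vec: a triplet-style metric-learning term over action segments and a time-contrastive term that preserves temporal ordering. The margins, helper names, and batch construction are hypothetical; the paper's exact objective and Siamese architecture may differ.

```python
# Sketch of the segment metric-learning and time-contrastive losses
# (hypothetical helper names; not the authors' code).
import torch
import torch.nn.functional as F

def segment_triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull embeddings from the same action segment together; push away
    embeddings of randomly sampled images from other segments."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

def time_contrastive_loss(frame, near, far, margin=0.2):
    """Keep a frame closer to a temporally nearby frame than to a
    temporally distant one, preserving ordering in the embedding."""
    d_near = (frame - near).pow(2).sum(dim=1)
    d_far = (frame - far).pow(2).sum(dim=1)
    return F.relu(d_near - d_far + margin).mean()

# Total objective (lambda_tc is an assumed weighting):
# loss = segment_triplet_loss(a, p, n) + lambda_tc * time_contrastive_loss(f, fn, ff)
```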
Year
2021
DOI
10.1177/02783649211032721
Venue
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Keywords
Hidden semi-Markov model, robot learning, imitation learning, learning and adaptive systems
DocType
Journal
Volume
40
Issue
10-11
ISSN
0278-3649
Citations
0
PageRank
0.34
References
0
Authors
5
Name                  Order  Citations  PageRank
Ajay Kumar Tanwani    1      66         9.07
Andy Yan              2      1          0.70
Jonathan Lee          3      70         16.21
Sylvain Calinon       4      1897       117.63
Ken Goldberg          5      3785       369.80