Title
Unifying scene registration and trajectory optimization for learning from demonstrations with application to manipulation of deformable objects
Abstract
Recent work [1], [2] has shown promising results in enabling robotic manipulation of deformable objects through learning from demonstrations. Their method computes a registration from the training scene to the test scene, and then applies an extrapolation of this registration to the training-scene gripper motion to obtain the gripper motion for the test scene. The warping cost of scene-to-scene registrations is used to select the nearest neighbor from a set of training demonstrations. Once the gripper motion has been generalized to the test situation, they apply trajectory optimization [3] to plan the robot motions that will track the predicted gripper motions. In many situations, however, the predicted gripper motions cannot be followed perfectly due to, for example, joint limits or obstacles. In this case the past work finds a path that minimizes deviation from the predicted gripper trajectory, as measured by Euclidean distance for position and angular distance for orientation. Measuring the error this way during the motion planning phase, however, ignores the underlying structure of the problem, namely the idea that rigid registrations are preferred when generalizing from training scene to test scene. Deviating from the gripper trajectory predicted by the extrapolated registration effectively changes the warp induced by the registration in the part of the space where the gripper trajectories lie. The main contribution of this paper is an algorithm that takes this effective final warp as the criterion to optimize in a unified optimization that simultaneously considers the scene-to-scene warping and the robot trajectory (which past work separated into two sequential steps). This results in an approach that adjusts to infeasibility in a way that adapts directly to the geometry of the scene and minimizes the introduction of additional warping cost.
In addition, this paper proposes to learn the motion of the gripper pads, whereas past work considered the motion of a coordinate frame attached to the gripper as a whole. This enables learning more precise grasping motions. Our experiments, which consider the task of knot tying, show that both the unified optimization and the explicit consideration of gripper pad motion result in improved performance.
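As a rough illustration of the nearest-neighbor step described in the abstract (not the authors' implementation, which uses non-rigid thin plate spline registration), the sketch below scores a candidate training scene against a test scene by fitting a simple affine map and penalizing how non-rigid that map is, then picks the demonstration with the lowest cost. All names (`registration_cost`, `nearest_demo`, `nonrigidity_weight`) are illustrative assumptions; scenes are assumed to be given as arrays of corresponding landmark points.

```python
import numpy as np

def registration_cost(train_pts, test_pts, nonrigidity_weight=1.0):
    """Fit an affine map train_pts -> test_pts and score it.

    Cost = least-squares fitting residual + a penalty on how far the
    linear part is from the nearest rigid (orthogonal) transform.
    This is a toy stand-in for the non-rigid warping cost used when
    matching a training scene to a test scene.
    """
    n = train_pts.shape[0]
    # Solve [train | 1] @ M ~= test in the least-squares sense.
    A = np.hstack([train_pts, np.ones((n, 1))])
    M, _, _, _ = np.linalg.lstsq(A, test_pts, rcond=None)
    residual = np.linalg.norm(A @ M - test_pts)
    # Distance of the linear part from its nearest orthogonal matrix
    # (via SVD) measures how non-rigid the fitted map is.
    L = M[:-1]
    U, _, Vt = np.linalg.svd(L)
    nonrigidity = np.linalg.norm(L - U @ Vt)
    return residual + nonrigidity_weight * nonrigidity

def nearest_demo(demo_scenes, test_scene):
    """Pick the training demonstration whose scene registers to the
    test scene with the lowest warping cost."""
    costs = [registration_cost(d, test_scene) for d in demo_scenes]
    return int(np.argmin(costs))
```

A demonstration whose scene differs from the test scene by a pure rotation and translation incurs almost no cost, while one related by a shear is penalized, so the rigid match is preferred, mirroring the paper's premise that rigid registrations generalize best.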
Year
2014
DOI
10.1109/IROS.2014.6943185
Venue
Intelligent Robots and Systems (IROS)
Keywords
control engineering computing,extrapolation,grippers,image registration,learning by example,optimisation,robot vision,trajectory control,Euclidean distance,angular distance,deformable objects manipulation,grasping motions,learning from demonstrations,motion planning,position,predicted gripper motions,predicted gripper trajectory,robot motions,robot trajectory,robotic manipulation,scene-to-scene registrations,scene-to-scene warping,test scene,training scene registration,trajectory optimization,unified optimization,warping cost
Field
Motion planning,k-nearest neighbors algorithm,Computer vision,Image warping,Trajectory optimization,Computer science,Euclidean distance,Artificial intelligence,Angular distance,Robot,Trajectory
DocType
Conference
ISSN
2153-0858
Citations
5
PageRank
0.42
References
12
Authors
6
Name                     Order  Citations  PageRank
Alex Lee                 1      341        13.46
Sandy H. Huang           2      67         4.65
Dylan Hadfield-Menell    3      56         9.05
Eric Tzeng               4      1766       110.04
Pieter Abbeel            5      6363       376.48
Hadfield-Menell, D.      6      5          0.42