Title
Towards markerless visual servoing of grasping tasks for humanoid robots.
Abstract
Vision-based grasping for humanoid robots is a challenging problem due to a multitude of factors. First, humanoid robots use an “eye-to-hand” kinematic configuration that, in contrast to the more common “eye-in-hand” configuration, demands a precise estimate of the position of the robot's hand. Second, humanoid robots have a long kinematic chain from the eyes to the hands, which tends to accumulate calibration errors of the kinematic model, offsetting the measured hand-to-object relative pose from the real one. In this paper, we propose a method that solves these two issues jointly. A robust pose estimate of the robot's hand is obtained via a 3D model-based stereo-vision algorithm, using an edge-based distance-transform metric and synthetically generated images of the robot's internal arm-hand computer-graphics model (kinematics and appearance). A particle-based optimization method then adapts the robot's internal model on-line to match the real and the synthetically generated images, effectively compensating for the kinematic calibration errors. We evaluate the proposed approach with a position-based visual-servoing method on the iCub robot, showing the importance of continuous visual feedback in humanoid grasping tasks.
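The particle-based model adaptation described in the abstract can be sketched as a sample–score–resample loop over candidate joint-angle offsets. The code below is an illustrative toy, not the authors' implementation: edge_distance_cost is a hypothetical quadratic stand-in for the paper's edge-based distance-transform metric (which, in the real system, would render the arm-hand CAD model under the candidate offsets and compare its edges against the camera images), and particle_calibration is a generic elite-resampling optimizer.

```python
import numpy as np

def edge_distance_cost(offset, true_offset):
    # Hypothetical stand-in for the edge-based distance-transform metric:
    # the real system renders the arm-hand model with the candidate joint
    # offsets and scores edge alignment against the stereo images. Here a
    # quadratic cost around an (unknown) true offset plays that role.
    return np.sum((offset - true_offset) ** 2)

def particle_calibration(true_offset, n_particles=200, n_iters=30, seed=0):
    """Estimate joint-angle offsets with a simple particle-based optimizer:
    sample candidate offsets, score them, resample around the best ones."""
    rng = np.random.default_rng(seed)
    dim = true_offset.shape[0]
    # Initial particles spread over a plausible calibration-error range (rad).
    particles = rng.uniform(-0.2, 0.2, size=(n_particles, dim))
    noise = 0.05
    for _ in range(n_iters):
        costs = np.array([edge_distance_cost(p, true_offset) for p in particles])
        # Keep the best 10% of particles as the elite set.
        elite = particles[np.argsort(costs)[: n_particles // 10]]
        # Resample new particles around the elites, shrinking the noise.
        idx = rng.integers(0, elite.shape[0], size=n_particles)
        particles = elite[idx] + rng.normal(0.0, noise, size=(n_particles, dim))
        noise *= 0.8
    costs = np.array([edge_distance_cost(p, true_offset) for p in particles])
    return particles[np.argmin(costs)]

# Simulated (unknown) calibration error on three joints, in radians.
true = np.array([0.05, -0.03, 0.08])
est = particle_calibration(true)
```

On this toy cost the estimated offsets converge to the simulated calibration error within a few millradians; in the paper, the same idea runs on-line against live camera images so the hand pose used by the position-based visual-servoing loop stays consistent with what the robot actually sees.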
Year: 2017
DOI: 10.1109/ICRA.2017.7989441
Venue: ICRA
Field: Robot control, Computer vision, iCub, Kinematics, Robot kinematics, Pose, Visual servoing, Artificial intelligence, Engineering, Robot, Humanoid robot
DocType: Conference
Volume: 2017
Issue: 1
Citations: 0
PageRank: 0.34
References: 19
Authors: 3
Name                  Order  Citations  PageRank
Vicente, P.           1      15         5.46
Lorenzo Jamone        2      149        20.57
Alexandre Bernardino  3      710        78.77