Title
Learning Manipulation Tasks from Human Demonstration and 3D Shape Segmentation.
Abstract
According to neuropsychology studies, 3D shape segmentation plays an important role in human perception of objects: when an object is perceived for grasping, it is first parsed into its constituent parts. This capability is missing in current robot planning systems, which are therefore hindered in their ability to plan part-specific grasps suited to the task at hand. In this paper, a novel approach for part-based grasping is presented that combines 3D shape segmentation, programming by human demonstration, and manipulation planning. The central advantage over previous approaches is the use of a topological method for shape segmentation, enabling both object categorization and robot grasping according to the affordances of an object. Manipulation tasks are demonstrated in a virtual reality environment using a data glove and a motion tracker, and the specific parts of the objects where grasping occurs are learned and encoded in the task description. Tasks are then planned and executed in a robot environment, targeting semantically relevant parts for grasping. Planning in the robot environment generalizes to objects similar to those used for task demonstration, i.e. objects that belong to the same category. Results obtained in 3D simulation confirm that the proposed approach finds grasps appropriate for the requested task with less effort. (c) 2012 Taylor & Francis and The Robotics Society of Japan
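The abstract's pipeline (segment the object into parts, record which part the human grasps, then restrict planning on a same-category object to that part) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the Part structure, the function names, and the nearest-centroid matching heuristic are not from the paper.

# Hypothetical sketch of part-based grasp transfer as described in the
# abstract; names and the matching heuristic are illustrative assumptions,
# not the authors' implementation.
from dataclasses import dataclass

@dataclass
class Part:
    label: str        # semantic/topological part label, e.g. "handle"
    centroid: tuple   # (x, y, z) centroid of the segmented part

def record_demonstration(parts, grasp_point):
    """Return the label of the segmented part closest to the demonstrated grasp."""
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p.centroid, grasp_point))
    return min(parts, key=dist2).label

def plan_grasp(parts, task_part_label):
    """Restrict grasp planning on a new, same-category object to the part
    whose label matches the one stored in the task description."""
    candidates = [p for p in parts if p.label == task_part_label]
    if not candidates:
        raise ValueError(f"no part labelled {task_part_label!r} on this object")
    return candidates[0]  # a real planner would sample grasps on this part

# Usage: a grasp demonstrated on a mug's handle transfers to another mug.
demo_parts = [Part("body", (0.0, 0.0, 0.0)), Part("handle", (0.06, 0.0, 0.02))]
task_label = record_demonstration(demo_parts, grasp_point=(0.055, 0.01, 0.02))
new_parts = [Part("body", (0.0, 0.0, 0.0)), Part("handle", (-0.05, 0.0, 0.03))]
target = plan_grasp(new_parts, task_label)
print(task_label, target.centroid)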
Year
2012
DOI
10.1080/01691864.2012.703167
Venue
ADVANCED ROBOTICS
Keywords
programming by demonstration, shape segmentation, virtual reality, manipulation planning, grasping
Field
Categorization, Computer vision, Wired glove, Virtual reality, Segmentation, Computer science, Artificial intelligence, Parsing, Robot, Perception, Affordance
DocType
Journal
Volume
26
Issue
16
ISSN
0169-1864
Citations
2
PageRank
0.36
References
19
Authors
2
Name             Order  Citations  PageRank
Jacopo Aleotti   1      259        29.76
Stefano Caselli  2      314        36.32