Title: Learning Human Priors for Task-Constrained Grasping
Abstract: An autonomous agent using man-made objects must understand how the task conditions the grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in features. The learned representation discards the parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the features relevant to the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching them to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.
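The abstract's filter-and-order step, read together with the kernel density estimation keyword listed below, suggests a density-based ranking of candidate grasps against demonstrated ones. The following is a minimal sketch of that idea, assuming the learned representation is already available as fixed-length feature vectors; the feature dimensions, data, and threshold rule here are hypothetical illustrations, not the paper's actual pipeline.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical data: 50 demonstrated grasps and 10 synthesized candidates,
# each described by a 4-D feature vector from the learned representation.
# The real feature construction from the paper is not reproduced here.
rng = np.random.default_rng(0)
demo_features = rng.normal(size=(4, 50))       # columns are demonstrations
candidate_features = rng.normal(size=(4, 10))  # columns are candidates

# Fit a kernel density estimate over the task-specific demonstrations.
task_density = gaussian_kde(demo_features)

# Score each candidate by its density under the demonstrations: a high
# density means the candidate resembles grasps humans used for this task.
scores = task_density(candidate_features)
ranking = np.argsort(scores)[::-1]

# Hypothetical filtering rule: keep candidates at least as dense as the
# 10th percentile of the demonstrations themselves.
threshold = np.percentile(task_density(demo_features), 10)
valid = [int(i) for i in ranking if scores[i] >= threshold]
print("ranked candidates:", ranking)
print("candidates passing the filter:", valid)
```

Ranking by density under the demonstrations is what lets the approach avoid an analytically formulated task metric: the ordering is induced entirely by the learned feature space.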
Year: 2015
DOI: 10.1007/978-3-319-20904-3_20
Venue: Computer Vision Systems (ICVS 2015)
Field: Computer vision, Autonomous agent, GRASP, Computer science, Artificial intelligence, Robot, Point cloud, Prior probability, Machine learning, Color quantization, Feature learning, Kernel density estimation
DocType: Conference
Volume: 9163
ISSN: 0302-9743
Citations: 2
PageRank: 0.40
References: 11
Authors: 4
Name             Order   Citations   PageRank
Martin Hjelm     1       7           1.51
Carl Henrik Ek   2       327         30.76
Renaud Detry     3       183         13.94
Danica Kragic    4       2070        142.17