Title: Learning Dexterous Grasping with Object-Centric Visual Affordances
Abstract: Dexterous robotic hands are appealing for their agility and human-like morphology, yet their high degree of freedom makes learning to manipulate challenging. We introduce an approach for learning dexterous grasping. Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop to learn grasping policies that favor the same object regions favored by people. Unlike traditional approaches that learn from human demonstration trajectories (e.g., hand joint sequences captured with a glove), the proposed prior is object-centric and image-based, allowing the agent to anticipate useful affordance regions for objects unseen during policy learning. We demonstrate our idea with a 30-DoF five-fingered robotic hand simulator on 40 objects from two datasets, where it successfully and efficiently learns policies for stable functional grasps. Our affordance-guided policies are significantly more effective, generalize better to novel objects, train 3x faster than the baselines, and are more robust to noisy sensor readings and actuation. Our work offers a step towards manipulation agents that learn by watching how people use objects, without requiring state and action information about the human body. Project website with videos: http://vision.cs.utexas.edu/projects/graff-dexterous-affordance-grasp.
Year: 2021
DOI: 10.1109/ICRA48506.2021.9561802
Venue: 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021)
DocType: Conference
ISSN: 1050-4729
Citations: 0
PageRank: 0.34
References: 14
Authors: 2
Name                 Order   Citations   PageRank
Priyanka Mandikal    1       20          2.11
Kristen Grauman      2       62583       26.34