Title
Deep learning for detecting robotic grasps
Abstract
We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGB-D robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
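The two-step cascade described above maps naturally onto a small amount of code. The following is a minimal NumPy sketch, assuming hypothetical scorer networks and a top-k handoff (TinyScorer, the feature sizes, and top_k are all illustrative names, not the authors' architecture): the small, fast network scores every candidate grasp, and only the highest-scoring few are re-evaluated by the larger network.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyScorer:
    """Stand-in for a deep network that scores grasp candidates."""
    def __init__(self, in_dim, hidden, rng):
        self.W1 = rng.normal(scale=0.1, size=(in_dim, hidden))
        self.W2 = rng.normal(scale=0.1, size=(hidden,))

    def score(self, X):
        # X: (n_candidates, in_dim) feature matrix; returns scores in (0, 1).
        return sigmoid(np.tanh(X @ self.W1) @ self.W2)

def cascaded_detect(features, fast_net, slow_net, top_k=100):
    """First stage cheaply prunes candidates; second stage re-scores survivors."""
    coarse = fast_net.score(features)        # cheap pass over every candidate
    keep = np.argsort(coarse)[-top_k:]       # keep only the top-k detections
    fine = slow_net.score(features[keep])    # expensive pass on survivors only
    best = keep[np.argmax(fine)]
    return best, fine.max()

rng = np.random.default_rng(0)
candidates = rng.normal(size=(5000, 64))     # e.g. 5000 candidate grasp rectangles
fast = TinyScorer(64, 8, rng)                # few features: fast to run
slow = TinyScorer(64, 128, rng)              # more features: slower but run on few
idx, s = cascaded_detect(candidates, fast, slow)
print(f"best candidate {idx} with score {s:.3f}")

The design point the cascade exploits is that pruning is cheap relative to precise scoring: the larger network's cost is paid on only top_k candidates rather than on all of them.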
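The abstract's second contribution, structured multimodal group regularization, can also be sketched. Assuming first-layer weights whose input rows are grouped by modality (the layout, the slice names, and the max-norm form of the penalty below are illustrative assumptions, not the paper's exact formulation), a group penalty that discourages a hidden unit from spreading weight across every modality might look like:

import numpy as np

def multimodal_group_penalty(W, modality_slices, lam=1e-3):
    """Sketch of a multimodal group-regularization term: for each hidden unit
    (column of W) and each modality's block of input rows, penalize the
    max-absolute weight in that block, so each hidden unit is encouraged to
    draw on only a few modalities rather than a little of everything."""
    penalty = 0.0
    for sl in modality_slices.values():
        block = np.abs(W[sl, :])            # |weights| from one modality's inputs
        penalty += block.max(axis=0).sum()  # max over block, summed over hidden units
    return lam * penalty

# Hypothetical input layout: depth rows 0-63, color rows 64-127, normals 128-191.
slices = {"depth": slice(0, 64), "color": slice(64, 128), "normals": slice(128, 192)}
W = np.random.default_rng(1).normal(size=(192, 50))   # first-layer weights
print(f"group penalty: {multimodal_group_penalty(W, slices):.4f}")

In training, a term like this would be added to the network's loss, trading prediction accuracy against modality-sparse features.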
Year
2015
DOI
10.1177/0278364914549607
Venue
I. J. Robotic Res.
Keywords
Robotic grasping, deep learning, RGB-D multi-modal data, Baxter, PR2, 3D feature learning
DocType
Journal
Volume
34
Issue
4-5
ISSN
0278-3649
Citations
286
PageRank
9.67
References
53
Authors
3
Name             Order   Citations   PageRank
Ian Lenz         1       323         12.07
Honglak Lee      2       62473       98.39
Ashutosh Saxena  3       45752       27.88