| Abstract |
|---|
| We present a novel approach for detecting objects and estimating their 3D pose in single images of cluttered scenes. Objects are given in terms of 3D models without accompanying texture cues. A deformable parts-based model is trained on clusters of silhouettes of similar poses and produces hypotheses about possible object locations at test time. Objects are simultaneously segmented and verified inside each hypothesis bounding region by selecting the set of superpixels whose collective shape matches the model silhouette. A final iteration on the 6-DOF object pose minimizes the distance between the selected image contours and the actual projection of the 3D model. We demonstrate successful grasps using our detection and pose estimate with a PR2 robot. Extensive evaluation with a novel ground truth dataset shows the considerable benefit of using shape-driven cues for detecting objects in heavily cluttered scenes. |

| Year | DOI | Venue |
|---|---|---|
| 2014 | 10.1109/ICRA.2014.6907430 | ICRA |

| Keywords | Field | DocType |
|---|---|---|
| image contours, texture cues, 3D models, cluttered scenes, pose estimation, grasping, edge detection, object detection, single image 3D object detection, grippers, 6-DOF object pose, clutter, image texture, robot vision, PR2 robot | Object detection, Computer vision, Viola–Jones object detection framework, Object-class detection, Feature detection (computer vision), Pattern recognition, Image texture, Silhouette, 3D pose estimation, Pose, Artificial intelligence, Engineering | Conference |

| Volume | Issue | ISSN |
|---|---|---|
| 2014 | 1 | 1050-4729 |

| Citations | PageRank | References |
|---|---|---|
| 23 | 1.22 | 18 |

| Authors |
|---|
| 8 |

| Name | Order | Citations | PageRank |
|---|---|---|---|
| Menglong Zhu | 1 | 186 | 8.21 |
| Konstantinos G. Derpanis | 2 | 431 | 22.45 |
| Yinfei Yang | 3 | 99 | 16.53 |
| Samarth Brahmbhatt | 4 | 37 | 4.14 |
| Mabel Zhang | 5 | 29 | 2.01 |
| Cody J. Phillips | 6 | 45 | 4.80 |
| Matthieu Lecce | 7 | 71 | 2.73 |
| Konstantinos Daniilidis | 8 | 3122 | 255.45 |