Title
Predicting Complete 3D Models of Indoor Scenes
Abstract
One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the well-known challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scenes. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.
Year
2015
Venue
CoRR
Field
Computer vision, Visual reasoning, Pattern recognition, 3D shapes, Computer science, Segmentation, Artificial intelligence, Parsing, Machine learning, Robotics
DocType
Journal
Volume
abs/1504.02437
Citations
9
PageRank
0.56
References
31
Authors
3
Name          Order  Citations  PageRank
Ruiqi Guo     1      564        22.10
Chuhang Zou   2      10         1.24
Derek Hoiem   3      4998       302.66