Title
3DFS: Deformable Dense Depth Fusion and Segmentation for Object Reconstruction from a Handheld Camera
Abstract
We propose an approach for 3D reconstruction and segmentation of a single object placed on a flat surface from an input video. We first estimate dense depth maps for multiple views using a proposed objective function that preserves detail. The resulting depth maps are then fused using a proposed implicit surface function that is robust to estimation error, producing a smooth surface reconstruction of the entire scene. Finally, the object is segmented from the remaining scene using a proposed 2D-3D segmentation method that incorporates image and depth cues with priors and regularization over the 3D volume and 2D segmentations. We evaluate the 3D reconstructions qualitatively on our Object-Videos dataset, comparing against fusion, multiview stereo, and segmentation baselines. We also quantitatively evaluate the dense depth estimation on the RGBD Scenes V2 dataset [Henry et al. 2013] and the segmentation on keyframe annotations of the Object-Videos dataset.
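The abstract's fusion step relies on the paper's own proposed implicit surface function, which is not specified here. As a rough illustration of how per-view depth maps are typically fused into a volumetric implicit surface, the sketch below implements classic truncated signed distance function (TSDF) averaging in the style of Curless and Levoy, not the paper's method; the function name fuse_depth_map, the voxel grid layout, and the pinhole intrinsics K are all hypothetical.

```python
# A minimal TSDF fusion sketch (NOT the paper's proposed implicit surface
# function): each depth map is converted to a truncated signed distance
# along the viewing ray and averaged into a voxel grid.
import numpy as np

def fuse_depth_map(tsdf, weights, origin, voxel_size, depth, K, cam_to_world,
                   trunc=0.04):
    """Integrate one (H, W) depth map (meters, 0 = missing) into the volume."""
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel center.
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1).reshape(-1, 3)

    # Transform voxel centers into the camera frame and project with K.
    world_to_cam = np.linalg.inv(cam_to_world)
    cam = pts @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = cam[:, 2]
    z_safe = np.where(z > 1e-6, z, 1.0)  # avoid divide-by-zero for voxels behind the camera
    uv = cam @ K.T
    u = np.round(uv[:, 0] / z_safe).astype(int)
    v = np.round(uv[:, 1] / z_safe).astype(int)

    # Keep voxels that project inside the image onto a measured depth pixel.
    H, W = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    # Signed distance along the ray, truncated; skip voxels far behind the surface.
    sdf = (d - z) / trunc
    update = valid & (sdf > -1.0)
    sdf = np.clip(sdf, -1.0, 1.0)

    # Weighted running average per voxel (observation weight 1).
    flat_tsdf, flat_w = tsdf.reshape(-1), weights.reshape(-1)
    w_new = flat_w[update] + 1.0
    flat_tsdf[update] = (flat_tsdf[update] * flat_w[update] + sdf[update]) / w_new
    flat_w[update] = w_new
```

Calling this once per frame with the estimated depths and camera poses, then extracting the zero level set (e.g., with skimage.measure.marching_cubes), yields a fused surface; the paper's contribution is a fusion function designed to tolerate depth-estimation error, which this plain average does not capture.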
Year
2016
Venue
arXiv: Computer Vision and Pattern Recognition
Field
Computer vision, Surface reconstruction, Scale-space segmentation, Pattern recognition, Computer science, Segmentation, Segmentation-based object categorization, Regularization (mathematics), Artificial intelligence, Depth perception, Depth map, 3D reconstruction
DocType
Journal
Volume
abs/1606.05002
Citations
1
PageRank
0.36
References
0
Authors
4
Name                  Order  Citations  PageRank
Tanmay Gupta          1      2          2.41
Daeyun Shin           2      7          2.13
Naren Sivagnanadasan  3      4          0.76
Derek Hoiem           4      4998       302.66