Title
Manhattan Room Layout Reconstruction from a Single 360° Image: A Comparative Study of State-of-the-Art Methods
Abstract
Recent approaches for predicting layouts from 360° panoramas produce excellent results. These approaches build on a common framework consisting of three steps: a pre-processing step based on edge-based alignment, prediction of layout elements, and a post-processing step that fits a 3D layout to the predicted elements. Until now, it has been difficult to compare the methods because of their differing design decisions, such as the encoding network (e.g., SegNet or ResNet), the type of elements predicted (e.g., corners, wall/floor boundaries, or semantic segmentation), and the method of fitting the 3D layout. To address this challenge, we summarize and describe the common framework, its variants, and the impact of the design decisions. For a complete evaluation, we also propose extended annotations for the Matterport3D dataset (Chang et al.: Matterport3D: Learning from RGB-D data in indoor environments, 2017) and introduce two depth-based evaluation metrics.
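The abstract outlines a three-step pipeline (edge-based alignment, layout-element prediction, 3D layout fitting) and mentions depth-based evaluation metrics. The Python sketch below is a minimal, hypothetical skeleton of such a pipeline; every function name, signature, and default value is an assumption for illustration, not the paper's implementation, and the depth metric shown is one plausible example (RMSE between rendered depth maps) rather than the specific metrics the paper introduces.

```python
# Illustrative skeleton of the three-stage layout-prediction framework
# described in the abstract. All names and defaults are assumptions.
import numpy as np


def align_panorama(pano: np.ndarray) -> np.ndarray:
    """Step 1 (pre-processing): rotate the equirectangular panorama so that
    wall/floor edges align with the Manhattan axes. Placeholder: no-op."""
    # A real implementation would estimate vanishing points from long edges
    # and warp the panorama accordingly.
    return pano


def predict_layout_elements(pano: np.ndarray) -> dict:
    """Step 2: an encoder network (e.g., SegNet- or ResNet-based) predicts
    layout elements such as corner probabilities or boundary maps.
    Placeholder: returns constant probability/boundary maps."""
    h, w, _ = pano.shape
    return {
        "corner_prob": np.full(w, 0.5),        # per-column corner likelihood
        "boundary_y": np.full((2, w), h / 2),  # ceiling/floor boundary rows
    }


def fit_3d_layout(elements: dict, camera_height: float = 1.6) -> np.ndarray:
    """Step 3 (post-processing): fit a Manhattan-world 3D layout to the
    predicted elements. Placeholder: returns a square floor plan scaled by
    an assumed camera height."""
    floor_plan = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    return floor_plan * camera_height


def depth_rmse(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
    """One plausible depth-based evaluation: RMSE between depth maps rendered
    from the predicted and ground-truth layouts."""
    return float(np.sqrt(np.mean((pred_depth - gt_depth) ** 2)))


if __name__ == "__main__":
    pano = np.zeros((512, 1024, 3))  # dummy equirectangular image
    layout = fit_3d_layout(predict_layout_elements(align_panorama(pano)))
    print("Recovered floor-plan corners:\n", layout)
```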
Year: 2021
DOI: 10.1007/s11263-020-01426-8
Venue: INTERNATIONAL JOURNAL OF COMPUTER VISION
Keywords: 3D room layout, Deep learning, Single image 3D, Manhattan world
DocType: Journal
Volume: 129
Issue: 5
ISSN: 0920-5691
Citations: 1
PageRank: 0.35
References: 0
Authors: 8
Name             Order  Citations  PageRank
Chuhang Zou      1      1          1.03
Su Jheng-Wei     2      1          1.03
Chi-Han Peng     3      48         4.61
R. Alex Colburn  4      634        52.28
Qi Shan          5      742        33.49
Peter Wonka      6      2854       165.59
Hung-Kuo Chu     7      310        27.04
Derek Hoiem      8      4998       302.66