Title
Learning high-quality depth map from 360° multi-exposure imagery
Abstract
Monocular 360° depth estimation is an important and challenging task in computer vision and 3D vision. Recently, deep neural networks have shown a strong ability to estimate depth maps from a single 360° image. However, existing 360° depth estimation networks are typically supervised with well-exposed 360° images and their corresponding depth maps, ignoring under-exposed and over-exposed images; this leads to low-quality depth maps and poor generalization. In this paper, we train an improved convolutional neural network, URectNet, with a distortion-weighted loss function to learn high-quality depth maps from multi-exposure 360° images. First, we insert skip connections into the baseline 360° depth estimation network to improve performance. Then, we design a distortion-weighted loss function to eliminate the effect of the distortion introduced by 360° images during network training. Because 360° multi-exposure images are scarce, we render a 360° multi-exposure dataset from Matterport3D for network training; increasing the dynamic range of the training images enhances the network's generalization capability. Finally, extensive experiments and ablation studies validate our method against existing state-of-the-art algorithms, demonstrating that it achieves favorable performance.
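The abstract does not give the exact form of the distortion-weighted loss. A common way to compensate for equirectangular distortion, sketched below as an assumption rather than the paper's actual formulation, is to weight the per-pixel depth error by the cosine of each pixel's latitude, since rows near the poles are oversampled in a 360° image. The function name and the L1 error choice here are illustrative only.

```python
import numpy as np

def distortion_weighted_l1_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Hypothetical latitude-weighted L1 depth loss for equirectangular images.

    Pixels near the poles of an equirectangular projection cover less solid
    angle than pixels at the equator, so their error is down-weighted by
    cos(latitude). This is a common heuristic, not necessarily the exact
    loss used in the paper.
    """
    h, w = pred.shape
    # Latitude of each pixel-row center: +pi/2 at the top row, -pi/2 at the bottom.
    lat = (0.5 - (np.arange(h) + 0.5) / h) * np.pi
    # Broadcast the per-row weight across all columns.
    weights = np.cos(lat)[:, None] * np.ones((1, w))
    # Weighted mean absolute error, normalized by the total weight.
    return float(np.sum(weights * np.abs(pred - target)) / np.sum(weights))
```

In a training loop, the same weighting map would typically be precomputed once per image resolution and multiplied into whatever per-pixel loss (L1, L2, BerHu) the network uses.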
Year
2022
DOI
10.1007/s11042-022-13340-x
Venue
Multimedia Tools and Applications
Keywords
Omnidirectional media, 360° imagery, Depth estimation, Multi-exposure
DocType
Journal
Volume
81
Issue
25
ISSN
1380-7501
Citations
0
PageRank
0.34
References
3
Authors
4
Name | Order | Citations | PageRank
Xu Chao | 1 | 0 | 0.34
Yang Huamin | 2 | 0 | 0.34
Cheng Han | 3 | 0 | 1.01
Chao Zhang | 4 | 939 | 103.66