Title
VPLNet: Deep Single View Normal Estimation With Vanishing Points and Lines
Abstract
We present a novel single-view surface normal estimation method that combines traditional line and vanishing point analysis with a deep learning approach. Starting from a color image and a Manhattan line map, we use a deep neural network to regress a dense normal map and a dense Manhattan label map that identifies planar regions aligned with the Manhattan directions. We fuse the normal map and label map in a fully differentiable manner to produce a refined normal map as final output. To do so, we softly decompose the output into a Manhattan part and a non-Manhattan part. The Manhattan part is treated by discrete classification and vanishing points, while the non-Manhattan part is learned by direct supervision. Our method achieves state-of-the-art results on standard single-view normal estimation benchmarks. More importantly, we show that by using vanishing points and lines, our method has better generalization ability than existing works. In addition, we demonstrate how our surface normal network can improve the performance of depth estimation networks, both quantitatively and qualitatively, in particular, in 3D reconstructions of walls and other flat surfaces.
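The abstract describes a soft, differentiable fusion of a regressed normal map with a Manhattan label map. The sketch below shows one plausible way such a fusion could be written; it is not the paper's implementation. The function name fuse_normals, the assumed 3+1 class layout (three Manhattan classes plus one non-Manhattan class), and the omission of sign disambiguation for the Manhattan directions are all illustrative assumptions.

```python
import numpy as np

def fuse_normals(n_reg, label_probs, manhattan_dirs):
    """Hypothetical soft fusion of a regressed normal map with Manhattan directions.

    n_reg:          (H, W, 3) regressed unit normals
    label_probs:    (H, W, 4) per-pixel probabilities for three Manhattan classes
                    plus a non-Manhattan class (assumed to be a softmax output)
    manhattan_dirs: (3, 3) unit direction vectors derived from the vanishing points
                    (sign handling, e.g. flipping toward the camera, is omitted here)
    """
    # Manhattan part: probability-weighted blend of the three canonical directions.
    manhattan_part = np.einsum('hwk,kc->hwc', label_probs[..., :3], manhattan_dirs)
    # Non-Manhattan part: keep the directly regressed normal, weighted by its probability.
    non_manhattan_part = label_probs[..., 3:4] * n_reg
    fused = manhattan_part + non_manhattan_part
    # Renormalize to unit length; every step above is differentiable.
    return fused / (np.linalg.norm(fused, axis=-1, keepdims=True) + 1e-8)
```

Because the blend uses only products, sums, and a normalization, gradients flow to both the normal branch and the label branch, which matches the abstract's claim that the fusion is fully differentiable.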
Year
2020
DOI
10.1109/CVPR42600.2020.00077
Venue
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Keywords
vanishing point analysis, deep learning approach, color image, Manhattan line map, deep neural network, dense normal map, dense Manhattan label map, planar regions, fully differentiable manner, Manhattan part, non-Manhattan part, discrete classification, direct supervision, surface normal network, depth estimation networks, single-view surface normal estimation method
DocType
Conference
ISSN
1063-6919
ISBN
978-1-7281-7169-2
Citations
0
PageRank
0.34
References
20
Authors
5
Name                 Order   Citations   PageRank
Rui Wang             1       85          5.98
David Geraghty       2       0           0.34
Kevin Matzen         3       66          5.00
Richard Szeliski     4       21300       2104.74
Jan-Michael Frahm    5       2847        141.20