Title
Learning to Parse Wireframes in Images of Man-Made Environments
Abstract
In this paper, we propose a learning-based approach to the task of automatically extracting a "wireframe" representation from images of cluttered man-made environments. The wireframe (see Fig. 1) contains all salient straight lines of the scene and their junctions, which together encode large-scale geometry and object shapes efficiently and accurately. To this end, we have built a very large new dataset of over 5,000 images with wireframes thoroughly labelled by humans. We propose two convolutional neural networks, one suited to detecting junctions and the other to detecting line segments with large spatial support. Trained on our dataset, the networks achieve significantly better performance than state-of-the-art methods for junction detection and line segment detection, respectively. We have conducted extensive experiments to evaluate the resulting wireframes quantitatively and qualitatively, and show convincingly that effective and efficient wireframe parsing for images of man-made environments is a feasible goal within reach. Such wireframes could benefit many important visual tasks, such as feature correspondence, 3D reconstruction, vision-based mapping, localization, and navigation. The data and source code are available at https://github.com/huangkuns/wireframe.
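The abstract describes a wireframe as the set of salient line segments of a scene together with their junctions. As an illustration only (this is a hypothetical sketch, not the authors' released code), such a representation can be modeled as junction points plus index pairs into them:

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Wireframe:
    """Hypothetical container for a wireframe: junction points in image
    coordinates, and line segments stored as index pairs into them."""
    junctions: List[Tuple[float, float]] = field(default_factory=list)
    segments: List[Tuple[int, int]] = field(default_factory=list)

    def add_junction(self, x: float, y: float) -> int:
        # Returns the index of the newly added junction.
        self.junctions.append((x, y))
        return len(self.junctions) - 1

    def add_segment(self, i: int, j: int) -> None:
        # A segment is valid only if both endpoints are known junctions.
        assert 0 <= i < len(self.junctions) and 0 <= j < len(self.junctions)
        self.segments.append((i, j))


# Toy example: a unit square as a wireframe with 4 junctions and 4 segments.
wf = Wireframe()
ids = [wf.add_junction(x, y) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
for a, b in zip(ids, ids[1:] + ids[:1]):
    wf.add_segment(a, b)
```

In the paper's pipeline, the junction list would come from the junction-detection network and the segments from the line-detection network; the structure above only fixes the output format, not how it is predicted.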
Year
2018
DOI
10.1109/CVPR.2018.00072
Venue
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Keywords
man-made environment images, feature correspondence, 3D reconstruction, vision-based mapping, object shapes, large-scale geometry, salient straight lines, cluttered man-made environments, wireframe representation, learning-based approach, parse wireframes, line segment detection, junction detection, convolutional neural networks
Field
Line segment, Computer vision, Pattern recognition, Source code, Convolutional neural network, Computer science, Image segmentation, Feature extraction, Artificial intelligence, Parsing, Salient, 3D reconstruction
DocType
Conference
ISSN
1063-6919
ISBN
978-1-5386-6421-6
Citations
4
PageRank
0.37
References
28
Authors
6
Name            Order  Citations  PageRank
Kun Huang       1      530        61.18
Yifan Wang      2      32         1.52
Zihan Zhou      3      833        39.42
Tianjiao Ding   4      4          0.71
Shenghua Gao    5      1607       66.89
Yi Ma           6      14931      536.21