Abstract |
---|
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and only little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels. |
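The abstract describes annotating static 3D scene elements with rough bounding primitives and transferring those labels into the image domain. A minimal sketch of the core projection step — not the paper's actual model, which reasons jointly over geometry and appearance — might look as follows; the camera intrinsics, label id, and helper names here are illustrative assumptions:

```python
import numpy as np

def project_points(K, pts3d):
    """Project Nx3 camera-frame points (z > 0 assumed) with intrinsics K (3x3) to Nx2 pixels."""
    uvw = (K @ pts3d.T).T            # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

def box_to_mask(K, corners3d, label, shape):
    """Paint `label` into an integer mask over the projected 3D box's 2D extent."""
    uv = project_points(K, corners3d)
    u0, v0 = np.floor(uv.min(axis=0)).astype(int)   # top-left of 2D extent
    u1, v1 = np.ceil(uv.max(axis=0)).astype(int)    # bottom-right of 2D extent
    mask = np.zeros(shape, dtype=np.int32)
    mask[max(v0, 0):min(v1, shape[0]), max(u0, 0):min(u1, shape[1])] = label
    return mask

# Axis-aligned 1 m cube centered 5 m in front of the camera (hypothetical scene element).
corners = np.array([[x, y, z] for x in (-0.5, 0.5)
                              for y in (-0.5, 0.5)
                              for z in (4.5, 5.5)])
K = np.array([[500.0, 0.0, 320.0],   # assumed pinhole intrinsics for a 640x480 image
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
mask = box_to_mask(K, corners, label=7, shape=(480, 640))
```

The paper's method refines such rough projected primitives into accurate per-pixel semantic and instance labels; this sketch only shows why a single 3D annotation can yield consistent 2D labels across many frames.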
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/CVPR.2016.401 | 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) |
Field | DocType | Volume |
---|---|---|
Annotation, Pattern recognition, Segmentation, Computer science, Artificial intelligence, Labeled data, Machine learning, Cognitive neuroscience of visual object recognition | Journal | abs/1511.03240 |
Issue | ISSN | Citations |
---|---|---|
1 | 1063-6919 | 17 |
PageRank | References | Authors |
---|---|---|
1.04 | 44 | 4 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jun Xie | 1 | 59 | 3.50 |
Martin Kiefel | 2 | 38 | 2.23 |
Ming-Ting Sun | 3 | 1984 | 169.84 |
Andreas Geiger | 4 | 4256 | 178.81 |