Title
Viewpoint invariant matching via developable surfaces
Abstract
Stereo systems, time-of-flight cameras, laser range sensors and consumer depth cameras nowadays produce a wealth of image data with depth information (RGBD), yet the number of approaches that can exploit color and geometry data at the same time is quite limited. We address the topic of wide-baseline matching between two RGBD images, i.e., finding correspondences from largely different viewpoints for recognition, model fusion or loop detection. We normalize local image features with respect to the underlying geometry and show a significantly increased number of correspondences. Rather than moving a virtual camera to some position in front of a dominant scene plane, we propose to unroll developable scene surfaces and detect features directly in the "wall paper" of the scene. This allows viewpoint-invariant matching even in scenes with curved architectural elements or with objects such as bottles, cans or (partial) cones. We demonstrate the usefulness of our approach on several real-world scenes containing different objects.
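The core idea of the abstract, flattening a developable surface into its planar "wall paper" so that features can be detected in a viewpoint-invariant domain, can be illustrated with a minimal sketch for the simplest developable surface, a cylinder. The function name and setup below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def unroll_cylinder(points, radius):
    """Unroll 3D points lying on a z-axis-aligned cylinder into a flat plane.

    A developable surface can be flattened without metric distortion;
    for a cylinder the map is (x, y, z) -> (radius * theta, z), so that
    distances in the unrolled plane equal geodesic distances on the surface.
    """
    theta = np.arctan2(points[:, 1], points[:, 0])
    u = radius * np.unwrap(theta)  # arc length along the circumference
    v = points[:, 2]               # height is preserved as-is
    return np.column_stack([u, v])
```

For example, two points separated by an angle dtheta and a height difference dz on a cylinder of radius r end up exactly sqrt((r*dtheta)^2 + dz^2) apart in the unrolled plane, which is their geodesic distance on the surface; a feature detector run on this "wall paper" therefore sees the same pattern regardless of the camera's viewpoint.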
Year
2012

DOI
10.1007/978-3-642-33868-7_7

Venue
ECCV Workshops

Keywords
depth information, consumer depth camera, image data, real world scene, geometry data, viewpoint invariant, dominant scene plane, different viewpoint, different object, rgbd image, developable surface, developable scene surface

Field
Computer vision, Normalization (statistics), Developable surface, Computer graphics (images), Virtual camera, Feature (computer vision), Viewpoints, Computer science, Invariant (mathematics), Artificial intelligence

DocType
Conference

Volume
7584

ISSN
0302-9743

Citations
2

PageRank
0.37

References
12

Authors
3
Name	Order	Citations	PageRank
Bernhard Zeisl	1	114	6.75
Kevin Köser	2	282	15.05
Marc Pollefeys	3	7671	475.90