Title
Flat2Sphere: Learning Spherical Convolution for Fast Features from 360° Imagery.
Abstract
While 360° cameras offer tremendous new possibilities in vision, graphics, and augmented reality, the spherical images they produce make core feature extraction non-trivial. Convolutional neural networks (CNNs) trained on images from perspective cameras yield “flat” filters, yet 360° images cannot be projected to a single plane without significant distortion. A naive solution that repeatedly projects the viewing sphere to all tangent planes is accurate, but much too computationally intensive for real problems. We propose to learn a spherical convolutional network that translates a planar CNN to process 360° imagery directly in its equirectangular projection. Our approach learns to reproduce the flat filter outputs on 360° data, sensitive to the varying distortion effects across the viewing sphere. The key benefits are 1) efficient feature extraction for 360° images and video, and 2) the ability to leverage powerful pre-trained networks researchers have carefully honed (together with massive labeled image training sets) for perspective images. We validate our approach compared to several alternative methods in terms of both raw CNN output accuracy as well as applying a state-of-the-art “flat” object detector to 360° data. Our method yields the most accurate results while saving orders of magnitude in computation versus the existing exact reprojection solution.
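The abstract's central idea is a convolution over the equirectangular projection whose kernel varies with latitude, since distortion depends on the image row. The following minimal NumPy sketch illustrates that idea only; the function `spherical_conv_rows`, its signature, and the per-row kernel layout are hypothetical assumptions for illustration, not the authors' implementation. Horizontal wrap-around reflects the 360° continuity of the projection.

```python
import numpy as np

def spherical_conv_rows(feature_map, kernels):
    """Apply a distinct kernel to each row of an equirectangular map.

    Hypothetical sketch of row-dependent (latitude-dependent) convolution:
      feature_map: (H, W) array in equirectangular projection.
      kernels: list of H arrays; kernels[i] has shape (kh_i, kw_i), and its
               width could grow toward the poles where content is stretched.
    Uses horizontal wrap-around (360° continuity) and zero padding at the
    poles. Returns an (H, W) array of filter responses.
    """
    H, W = feature_map.shape
    out = np.zeros((H, W))
    for i, k in enumerate(kernels):
        kh, kw = k.shape
        for j in range(W):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    r = i + di - kh // 2          # vertical offset, no wrap
                    c = (j + dj - kw // 2) % W    # horizontal 360° wrap
                    if 0 <= r < H:
                        acc += k[di, dj] * feature_map[r, c]
            out[i, j] = acc
    return out
```

In the paper's setting, the per-row kernels would be learned so that the outputs match those of a pre-trained planar CNN applied on tangent planes; here they are simply passed in as arrays.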
Year: 2017
Venue: Neural Information Processing Systems
Field: Graphics, Computer vision, Convolutional neural network, Convolution, Computer science, Equirectangular projection, Feature extraction, Augmented reality, Artificial intelligence, Distortion, Machine learning, Computation
DocType:
Volume: abs/1708.00919
Citations: 4
Journal:
PageRank: 0.47
References: 16
Authors: 2
Name             Order  Citations  PageRank
Yu-Chuan Su      1      87         14.90
Kristen Grauman  2      62583      26.34