Abstract |
---|
We introduce a method to automatically convert a single panoramic input into a multi-cylinder image representation that supports real-time, free-viewpoint view synthesis for virtual reality. We apply an existing convolutional neural network trained on pinhole images to a cylindrical panorama, using wrap padding to ensure agreement between the left and right edges. The network outputs a stack of semi-transparent panoramas at varying depths that can be easily rendered and composited with over blending. Initial experiments show that the method produces convincing parallax and cleaner object boundaries than a textured mesh representation. |
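The abstract mentions two simple building blocks: wrap padding, which makes a convolution see matching content across the panorama's left and right seam, and back-to-front over blending of the semi-transparent depth layers. Below is a minimal NumPy sketch of both ideas, under assumed conventions (the function names `wrap_pad` and `composite_over` and the RGBA layout are illustrative, not the authors' implementation):

```python
import numpy as np

def wrap_pad(panorama, pad):
    """Circularly pad a cylindrical panorama along its width so that a
    convolution sees matching content at the left and right edges.
    `panorama` is an (H, W, C) array; `pad` is the padding width in pixels."""
    left = panorama[:, -pad:]   # columns wrapped around from the right edge
    right = panorama[:, :pad]   # columns wrapped around from the left edge
    return np.concatenate([left, panorama, right], axis=1)

def composite_over(layers):
    """Back-to-front 'over' blending of RGBA layers (each (H, W, 4),
    values in [0, 1]), with the farthest layer first in the list."""
    out = np.zeros_like(layers[0][..., :3])
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

In practice a deep-learning framework's circular-padding mode would play the role of `wrap_pad` inside the network, and the over compositing would run per-frame on the GPU; the sketch only illustrates the arithmetic.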
Year | DOI | Venue |
---|---|---|
2021 | 10.1145/3450618.3469144 | International Conference on Computer Graphics and Interactive Techniques |
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
5 | 4
Name | Order | Citations | PageRank |
---|---|---|---|
Richa Gadgil | 1 | 0 | 0.34 |
Reesa John | 2 | 0 | 0.34 |
Stefanie Zollmann | 3 | 227 | 22.58 |
Jonathan Ventura | 4 | 224 | 20.60 |