Title
NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction
Abstract
While NeRF [28] has shown great success for neural reconstruction and rendering, its limited MLP capacity and long per-scene optimization times make it challenging to model large-scale indoor scenes. In contrast, classical 3D reconstruction methods can handle large-scale scenes but do not produce realistic renderings. We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering. We process the input image sequence to predict per-frame local radiance fields via direct network inference. These are then fused using a novel recurrent neural network that incrementally reconstructs a global, sparse scene representation in real-time at 22 fps. This global volume can be further fine-tuned to boost rendering quality. We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods. Project page: https://jetd1.github.io/NeRFusion-Web/
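The paper's own fusion is a learned recurrent network, but the classical TSDF-style fusion it builds on can be sketched with a weighted running average over a sparse voxel volume. The sketch below is a simplified illustration under assumed names (`fuse_frame`, and a dict keyed by voxel coordinates as the sparse representation), not the paper's actual method:

```python
def fuse_frame(global_vol, frame_vol, frame_weight=1.0):
    """Fuse one frame's local voxel observations into a global sparse volume.

    global_vol: dict mapping voxel coordinate (tuple) -> (value, weight)
    frame_vol:  dict mapping voxel coordinate -> observed value (float)

    Uses the classical weighted running average from TSDF fusion; the sparse
    dict means only observed voxels are ever stored, which is what makes
    incremental large-scale fusion tractable.
    """
    for coord, v_new in frame_vol.items():
        if coord in global_vol:
            v_old, w_old = global_vol[coord]
            w = w_old + frame_weight
            # Running average: old value weighted by accumulated weight.
            global_vol[coord] = ((v_old * w_old + v_new * frame_weight) / w, w)
        else:
            global_vol[coord] = (v_new, frame_weight)
    return global_vol


# Example: two frames observing an overlapping voxel.
vol = {}
fuse_frame(vol, {(0, 0, 0): 1.0})
fuse_frame(vol, {(0, 0, 0): 3.0, (1, 0, 0): 2.0})
# vol[(0, 0, 0)] -> (2.0, 2.0): average of 1.0 and 3.0, total weight 2
# vol[(1, 0, 0)] -> (2.0, 1.0): seen once
```

NeRFusion replaces this fixed averaging rule with a recurrent network that learns how to merge per-frame local radiance fields, but the incremental, sparse, frame-by-frame update pattern is the same.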
Year
2022
DOI
10.1109/CVPR52688.2022.00537
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
3D from multi-view and sensors, Image and video synthesis and generation
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
5
Name                Order  Citations  PageRank
Xiaoshuai Zhang     1      9          2.20
Sai Bi              2      63         5.28
Kalyan Sunkavalli   3      500        31.75
Hao Su              4      7343       302.07
Zexiang Xu          5      101        10.17