Title
Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video
Abstract
In this tech report, we present the current state of our ongoing work on reconstructing Neural Radiance Fields (NeRF) of general non-rigid scenes via ray bending. Non-rigid NeRF (NR-NeRF) takes RGB images of a deforming object (e.g., from a monocular video) as input and learns a geometry and appearance representation that not only reconstructs the input sequence but also re-renders any time step from novel camera views with high fidelity. In particular, we show that a consumer-grade camera is sufficient to synthesize convincing bullet-time videos of short and simple scenes. In addition, the resulting representation enables correspondence estimation across views and time, and provides rigidity scores for each point in the scene. We urge the reader to watch the supplemental videos for qualitative results.
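The ray-bending idea described above can be sketched in a few lines: sample points along a camera ray, then let a small deformation network, conditioned on a per-frame latent code, predict a 3D offset that warps each sample into a shared canonical frame before the canonical radiance field is queried. The following is a minimal NumPy sketch, not the paper's actual architecture; the function names (`sample_points_along_ray`, `bend_points`), the tiny two-layer MLP, and the latent dimension are all illustrative assumptions.

```python
import numpy as np

def sample_points_along_ray(origin, direction, n_samples=8, near=0.5, far=2.0):
    """Evenly spaced depths along a single ray; returns (n_samples, 3) points."""
    t = np.linspace(near, far, n_samples)
    return origin[None, :] + t[:, None] * direction[None, :]

def bend_points(points, latent, weights):
    """Hypothetical ray-bending MLP: maps (point, per-frame latent) to a 3D
    offset that deforms each sample into the shared canonical frame."""
    x = np.concatenate([points, np.tile(latent, (points.shape[0], 1))], axis=1)
    h = np.tanh(x @ weights["w1"])   # hidden layer
    offsets = h @ weights["w2"]      # one offset vector per sample point
    return points + offsets          # bent sample positions

rng = np.random.default_rng(0)
latent_dim = 4  # illustrative size of the per-frame deformation code
weights = {
    "w1": rng.normal(scale=0.1, size=(3 + latent_dim, 16)),
    "w2": rng.normal(scale=0.1, size=(16, 3)),
}
pts = sample_points_along_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
bent = bend_points(pts, rng.normal(size=latent_dim), weights)
print(bent.shape)  # (8, 3)
```

In training, the deformation weights and per-frame latent codes would be optimized jointly with the canonical NeRF so that bent samples explain every input frame; the magnitude of the predicted offsets is also what makes per-point rigidity scores possible.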
Year: 2021
DOI: 10.1109/ICCV48922.2021.01272
Venue: ICCV
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name                  Order  Citations  PageRank
Edgar Tretschk        1      3          1.06
Ayush Tewari          2      77         7.05
Vladislav Golyanik    3      221        2.55
Michael Zollhöfer     4      852        42.25
Christoph Lassner     5      0          0.34
Christian Theobalt    6      3211       159.16