| Abstract |
|---|
| In this article, we propose a system design and implementation for output-sensitive reconstruction, transmission, and rendering of 3D video avatars in distributed virtual environments. In our immersive telepresence system, users are captured by multiple RGBD sensors connected to a server that performs geometry reconstruction based on viewing feedback from remote telepresence parties. This feedback-and-reconstruction loop enables visibility-aware level-of-detail reconstruction of video avatars with respect to both geometry and texture data, and accounts for individual users as well as groups of collocated users. Our evaluation reveals that our approach significantly reduces reconstruction times, network bandwidth requirements, round-trip times, and rendering costs in many situations. |
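The abstract describes a loop in which remote viewers send viewing feedback to the reconstruction server, which then reconstructs each avatar at a visibility-aware level of detail. A minimal sketch of that idea might look as follows; all names, thresholds, and the reduction of "viewing feedback" to per-avatar screen coverage are illustrative assumptions, not the authors' actual implementation:

```python
# Hypothetical sketch of visibility-aware LOD selection driven by
# remote viewing feedback. Assumed names and thresholds throughout.
from dataclasses import dataclass

@dataclass
class ViewFeedback:
    avatar_id: str
    visible: bool           # avatar inside this remote viewer's frustum?
    screen_coverage: float  # fraction of the remote viewport it covers

def select_lod(feedback: ViewFeedback) -> int:
    """Return a reconstruction LOD: 0 = skip, higher = more detail."""
    if not feedback.visible:
        return 0  # invisible avatars need no reconstruction or transmission
    if feedback.screen_coverage > 0.25:
        return 3  # close-up viewer: full-resolution geometry and texture
    if feedback.screen_coverage > 0.05:
        return 2  # mid-range viewer: reduced mesh and texture resolution
    return 1      # distant viewer: coarse mesh, low-resolution texture

def aggregate_lod(feedbacks: list[ViewFeedback]) -> int:
    """For a group of collocated viewers, reconstruct the avatar once
    at the highest LOD any of them requires."""
    return max((select_lod(f) for f in feedbacks), default=0)
```

In such a scheme the server would reconstruct and transmit each avatar once per frame at the aggregated level, which is one plausible way the per-viewer feedback could translate into the reported savings in reconstruction time and bandwidth.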
| Year | DOI | Venue |
|---|---|---|
| 2022 | 10.1109/TVCG.2020.3037360 | IEEE Transactions on Visualization and Computer Graphics |

| Keywords | DocType | Volume |
|---|---|---|
| Immersive telepresence, avatars, output-sensitive rendering, distributed virtual environments | Journal | 28 |

| Issue | ISSN | Citations |
|---|---|---|
| 7 | 1077-2626 | 1 |

| PageRank | References | Authors |
|---|---|---|
| 0.35 | 30 | 3 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Adrian Kreskowski | 1 | 3 | 1.08 |
| Stephan Beck | 2 | 168 | 11.27 |
| Bernd Froehlich | 3 | 163 | 12.39 |