Abstract
---
Modern 3D printers are capable of printing large light-field displays at high resolutions. However, optimizing such displays over a full 3D volume for given light-field imagery remains a challenging task. Existing light-field displays are optimized at relatively low resolutions using a few co-planar layers, in a 2.5D fashion, to keep the problem tractable. In this paper, we propose a novel end-to-end optimization approach that encodes the input light-field imagery as a continuous-space implicit representation in a neural network. This allows fabricating high-resolution, attenuation-based volumetric displays that exhibit the target light fields. In addition, we incorporate the physical constraints of the material into the optimization so that the result can be printed in practice. Our simulation experiments demonstrate that our approach brings significant visual-quality improvements compared to multilayer and uniform grid-based approaches. We validate our simulations with fabricated prototypes and demonstrate that our pipeline is flexible enough to allow fabrication of both planar and non-planar displays.
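The attenuation-based display model mentioned in the abstract can be illustrated with a minimal sketch (this is not the paper's code): a continuous absorption field is queried at sample points along a viewing ray, and the transmitted intensity follows Beer-Lambert accumulation. Here the learned neural field is replaced by a hypothetical analytic stand-in (`absorption_field`, a soft sphere of ink); all function names are illustrative assumptions.

```python
import numpy as np

def absorption_field(points):
    """Toy stand-in for a learned implicit field: returns a non-negative
    absorption coefficient per 3D point (here, a soft sphere of "ink"
    centered at (0.5, 0.5, 0.5))."""
    r2 = np.sum((points - 0.5) ** 2, axis=-1)
    return np.clip(1.0 - 4.0 * r2, 0.0, None)

def render_ray(origin, direction, n_samples=64, t_near=0.0, t_far=1.0):
    """Beer-Lambert attenuation along one ray: T = exp(-sum_i sigma_i * dt)."""
    t = np.linspace(t_near, t_far, n_samples)
    dt = (t_far - t_near) / n_samples
    points = origin + t[:, None] * direction   # (n_samples, 3) sample positions
    sigma = absorption_field(points)           # per-sample absorption
    return np.exp(-np.sum(sigma) * dt)         # transmitted intensity in [0, 1]

# A ray through the sphere's center is attenuated more than one that misses it.
t_center = render_ray(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]))
t_edge   = render_ray(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

In the paper's setting, the analytic field would instead be a neural network trained end-to-end so that rays from many viewpoints reproduce the target light field, subject to the printable-material constraints.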
Year | DOI | Venue
---|---|---
2020 | 10.1145/3414685.3417879 | ACM Transactions on Graphics

Keywords | DocType | Volume
---|---|---
3D printing, computational fabrication, light field, neural networks, volumetric display | Journal | 39

Issue | ISSN | Citations
---|---|---
6 | 0730-0301 | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 6
Name | Order | Citations | PageRank |
---|---|---|---|
Quan Zheng | 1 | 108 | 9.35 |
Vahid Babaei | 2 | 7 | 2.83 |
Gordon Wetzstein | 3 | 945 | 72.47 |
Hans-Peter Seidel | 4 | 12532 | 801.49 |
Matthias Zwicker | 5 | 2513 | 129.25
Gurprit Singh | 6 | 0 | 0.34 |