Abstract |
---|
We propose a data-driven learned sky model, which we use for outdoor lighting estimation from a single image. As no large-scale dataset of images and their corresponding ground-truth illumination is readily available, we use complementary datasets to train our approach, combining the vast diversity of illumination conditions of SUN360 with the radiometrically calibrated and physically accurate Laval HDR sky database. Our key contribution is to provide a holistic view of both lighting modeling and estimation, solving both problems end-to-end. From a test image, our method can directly estimate an HDR environment map of the lighting without relying on analytical lighting models. We demonstrate the versatility and expressivity of our learned sky model and show that it can be used to recover plausible illumination, leading to visually pleasing virtual object insertions. To further evaluate our method, we capture a dataset of HDR 360° panoramas and show through extensive validation that we significantly outperform the previous state of the art. |
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/CVPR.2019.00709 | 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019) |
Field | DocType | Volume |
---|---|---|
Virtual image, Computer vision, Pattern recognition, Computer science, Sky, Ground truth, Artificial intelligence, Standard test image, Reflection mapping, Expressivity | Journal | abs/1905.03897 |
ISSN | Citations | PageRank |
---|---|---|
1063-6919 | 6 | 0.48 |
References | Authors |
---|---|
0 | 3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yannick Hold-Geoffroy | 1 | 14 | 4.30 |
Akshaya Athawale | 2 | 6 | 0.48 |
Jean-François Lalonde | 3 | 590 | 37.69 |