Title
Learning to Predict Indoor Illumination from a Single Image.
Abstract
We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three-step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion produces photo-realistic results, which we validate via a perceptual user study.
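Step 1 of the pipeline annotates light-source locations in LDR environment maps. As a rough illustration of what such an annotation produces, the hypothetical sketch below labels the brightest pixels of a toy panorama as lights using a fixed luminance threshold; the paper trains a classifier for this, so the function name and threshold here are illustrative assumptions only.

```python
def annotate_lights(ldr_envmap, threshold=0.9):
    """Toy stand-in for step 1 of the pipeline: label panorama pixels
    as light sources. The paper trains a robust classifier; this fixed
    luminance-threshold rule is only an illustrative assumption."""
    # Rec. 709 luminance from RGB.
    lum = [[0.2126 * r + 0.7152 * g + 0.0722 * b for (r, g, b) in row]
           for row in ldr_envmap]
    peak = max(max(row) for row in lum)
    # Pixels near the panorama's peak luminance are labelled as lights.
    return [[v >= threshold * peak for v in row] for row in lum]

# Tiny synthetic 4x8 panorama: dark everywhere except a bright "window".
env = [[(0.05, 0.05, 0.05)] * 8 for _ in range(4)]
env[1][3] = env[1][4] = (1.0, 1.0, 1.0)
mask = annotate_lights(env)
print(sum(v for row in mask for v in row))  # 2 pixels labelled as lights
```

In the actual method, binary masks like this serve as training targets for the light-location network, which is later fine-tuned on HDR data to also predict intensities.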
Year
2017
DOI
10.1145/3130800.3130891
Venue
ACM Trans. Graph.
Keywords
deep learning, indoor illumination
DocType
Journal
Volume
abs/1704.00090
Issue
6
ISSN
0730-0301
Citations
34
PageRank
1.27
References
19
Authors
7
Name                     Order  Citations  PageRank
Marc-André Gardner       1      222        11.20
Kalyan Sunkavalli        2      500        31.75
Ersin Yumer              3      187        8.36
Xiaohui Shen             4      1278       50.50
Emiliano Gambaretto      5      34         1.27
Christian Gagné          6      627        52.38
Jean-François Lalonde    7      590        37.69