Title
Synthesizing light field from a single image with variable MPI and two network fusion
Abstract
We propose a learning-based approach to synthesize a light field with a small baseline from a single image. We synthesize the novel view images by first using a convolutional neural network (CNN) to promote the input image into a layered representation of the scene. We extend the multiplane image (MPI) representation by allowing the disparity of the layers to be inferred from the input image. We show that, compared to the original MPI representation, our representation models the scenes more accurately. Moreover, we propose to handle the visible and occluded regions separately through two parallel networks. The synthesized images from these two networks are then combined through a soft visibility mask to generate the final results. To effectively train the networks, we introduce a large-scale light field dataset of over 2,000 unique scenes containing a wide range of objects. We demonstrate that our approach synthesizes high-quality light fields on a variety of scenes, outperforming the state-of-the-art methods.
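The abstract describes combining the outputs of two parallel networks (one handling visible regions, one handling occluded regions) through a soft visibility mask. The paper's exact formulation is not given here; a minimal sketch of the natural reading, a per-pixel convex blend (the function name `fuse_views` and the blend form are assumptions, not the authors' code):

```python
import numpy as np

def fuse_views(visible: np.ndarray, occluded: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Blend two synthesized views per pixel.

    mask is a soft visibility map in [0, 1]: 1 weights the visible-region
    branch fully, 0 weights the occluded-region branch fully.
    This is an illustrative guess at the fusion step, not the paper's code.
    """
    return mask * visible + (1.0 - mask) * occluded

# Tiny usage example with 2x2 single-channel "images".
visible = np.array([[1.0, 1.0], [1.0, 1.0]])
occluded = np.array([[0.0, 0.0], [0.0, 0.0]])
mask = np.array([[1.0, 0.5], [0.0, 1.0]])
fused = fuse_views(visible, occluded, mask)
print(fused)
```

With the occluded branch at zero, the blend reduces to the mask itself, which makes the weighting easy to inspect.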
Year
2020
DOI
10.1145/3414685.3417785
Venue
ACM Transactions on Graphics
Keywords
Light field, view synthesis, convolutional neural network
DocType
Journal
Volume
39
Issue
6
ISSN
0730-0301
Citations
1
PageRank
0.35
References
0
Authors (2)
Qinbo Li (Order 1, Citations 1, PageRank 1.36)
N. K. Kalantari (Order 2, Citations 511, PageRank 20.87)