Title
Depth Range Control in Visually Equivalent Light Field 3D
Abstract
The depth range of 3D video content is fixed by the shooting conditions, in particular the camera positions. Controlling the depth range in post-processing is difficult but essential, because video from arbitrary camera positions must be generated. If full light field information is available, video from any viewpoint can be generated exactly, making such post-processing possible. However, a light field contains a huge amount of data and is difficult to capture. To reduce the data quantity, we previously proposed the visually equivalent light field (VELF), which exploits the characteristics of human vision. Although a number of cameras are needed, a VELF can be captured with a camera array. Since camera interpolation is performed by linear blending, the computation is so simple that the ray distribution field of a VELF can be constructed by optical interpolation in the VELF3D display, which achieves high image quality owing to its high pixel usage efficiency. In this paper, we summarize the relationship between the characteristics of human vision, VELF, and the VELF3D display. We then propose a method for controlling the depth range of the image observed on the VELF3D display and discuss the effectiveness and limitations of displaying the processed image on it. Our method is applicable to other 3D displays, and because the computation is just weighted averaging, it is suitable for real-time applications.
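Since the abstract reduces both view interpolation and depth range control to weighted averaging (linear blending) of camera-array images, a short sketch may make that concrete. The Python/NumPy fragment below is a minimal illustration, not the authors' implementation: blend_views performs linear blending between adjacent views, and render_with_depth_scale shows one plausible way depth-range compression could be expressed as blending alone, by remapping the viewing position toward the array center (a baseline-scaling assumption on our part; the paper's actual weighting scheme may differ, and all names here are illustrative).

```python
# Minimal sketch of VELF-style view synthesis by linear blending.
# NOT the authors' code; function and parameter names are assumptions.
import numpy as np

def blend_views(left: np.ndarray, right: np.ndarray, alpha: float) -> np.ndarray:
    """Synthesize an intermediate view as a weighted average of two
    neighboring camera images (alpha = 0 -> left, alpha = 1 -> right)."""
    l = left.astype(np.float32)
    r = right.astype(np.float32)
    return (1.0 - alpha) * l + alpha * r

def render_with_depth_scale(views: list[np.ndarray],
                            x: float,
                            depth_scale: float) -> np.ndarray:
    """Render the view for a normalized eye position x in [0, 1].

    depth_scale < 1 compresses, and depth_scale > 1 stretches, the apparent
    depth range by shrinking or expanding the effective camera baseline.
    This baseline-scaling interpretation is our assumption for illustration.
    """
    x_scaled = 0.5 + (x - 0.5) * depth_scale   # remap toward the array center
    pos = x_scaled * (len(views) - 1)          # continuous position on the array
    i = int(np.clip(np.floor(pos), 0, len(views) - 2))
    return blend_views(views[i], views[i + 1], pos - i)
```

Because each output pixel is just a fixed-weight average of two input pixels, the per-frame cost is a single pass over the image, which is consistent with the abstract's claim of real-time suitability.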
Year
2021
DOI
10.1587/transele.2020DII0003
Venue
IEICE TRANSACTIONS ON ELECTRONICS
Keywords
3D display, light field, linear blending, depth range
DocType
Journal
Volume
E104C
Issue
2
ISSN
1745-1353
Citations
0
PageRank
0.34
References
0
Authors
5
Name               Order  Citations  PageRank
Munekazu Date      1      7          4.56
Shinya Shimizu     2      41         8.39
Hideaki Kimata     3      150        29.40
Dan Mikami         4      118        17.60
Yoshinori Kusachi  5      9          1.70