| Abstract |
| --- |
| Inverse tone mapping (iTM) transforms low-dynamic-range (LDR) content into high-dynamic-range (HDR) content and is an effective technique for improving the visual experience. iTM has developed rapidly with deep learning algorithms in recent years; however, the great majority of deep-learning-based iTM methods target images and ignore the temporal correlations between consecutive frames in videos. In this paper, we propose a multi-scale video iTM network with deformable alignment, which improves temporal consistency in videos. We first align the input consecutive LDR frames at the feature level with deformable convolutions and then simultaneously use multi-frame information to generate the HDR frame. Additionally, we adopt a multi-scale iTM architecture with a pyramid pooling module, which enables our network to reconstruct fine details as well as global features. The proposed network outperforms other iTM methods on quantitative metrics and achieves a significant visual improvement. |
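The abstract names a pyramid pooling module but gives no layer details. As an illustration only, a minimal PSPNet-style pyramid pooling block can be sketched in PyTorch; the bin sizes and channel counts below are assumptions, not the paper's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    """PSPNet-style pyramid pooling (illustrative sketch, not the paper's
    exact module): pool the feature map at several scales, project each
    pooled map with a 1x1 conv, upsample back, and concatenate with the
    input so both global context and local detail are available."""

    def __init__(self, in_ch, bins=(1, 2, 3, 6)):  # bin sizes are assumed
        super().__init__()
        out_ch = in_ch // len(bins)
        self.stages = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
            for b in bins)

    def forward(self, x):
        h, w = x.shape[2:]
        # Upsample each pooled branch to the input resolution and concat.
        feats = [x] + [F.interpolate(stage(x), size=(h, w),
                                     mode='bilinear', align_corners=False)
                       for stage in self.stages]
        return torch.cat(feats, dim=1)

ppm = PyramidPooling(64)
y = ppm(torch.randn(1, 64, 32, 32))
print(tuple(y.shape))  # (1, 128, 32, 32): 64 input + 4 branches of 16 channels
```

The feature-level alignment the abstract describes could similarly be built on `torchvision.ops.DeformConv2d`, which takes learned per-pixel offsets alongside the input features.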
| Year | DOI | Venue |
| --- | --- | --- |
| 2020 | 10.1109/VCIP49819.2020.9301780 | 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP) |

| Keywords | DocType | ISSN |
| --- | --- | --- |
| inverse tone mapping, high dynamic range, deformable convolution | Conference | 1018-8770 |

| ISBN | Citations | PageRank |
| --- | --- | --- |
| 978-1-7281-8069-4 | 0 | 0.34 |

| References | Authors |
| --- | --- |
| 0 | 3 |
| Name | Order | Citations | PageRank |
| --- | --- | --- | --- |
| Jiaqi Zou | 1 | 0 | 1.35 |
| Ke Mei | 2 | 0 | 0.34 |
| Songlin Sun | 3 | 176 | 25.76 |