Title
TANet: Target Attention Network for Video Bit-Depth Enhancement
Abstract
Video bit-depth enhancement (VBDE) reconstructs high-bit-depth (HBD) frames from a low-bit-depth (LBD) video sequence. As neighboring frames contain a considerable amount of complementary information related to the center frame, it is vital for VBDE to exploit neighboring frames as much as possible. Conventional VBDE algorithms with explicit alignment across frames attempt to warp each neighboring frame to the center frame with estimated optical flow, taking only pairwise correlation into account. Most spatiotemporal fusion approaches rely on direct concatenation or 3D convolution and treat all features equally, failing to focus on information related to the center frame. Therefore, in this paper, we introduce an improved nonlocal block as a global attentive alignment (GAA) module, which takes the whole input video sequence into consideration to capture globally correlated features and perform implicit alignment. Furthermore, given the bulk of features extracted from the center and neighboring frames, we propose target-guided attention (TGA). TGA can exploit more center-frame-related details and facilitate feature fusion. The proposed network (dubbed TANet) effectively eliminates false contours and recovers the center frame in high quality, as demonstrated by the experimental results. TANet outperforms state-of-the-art models in terms of both PSNR and SSIM with low time consumption.
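The core idea of the GAA module described above — letting the center frame attend to features of every frame in the sequence rather than warping neighbors pairwise — can be illustrated with a minimal nonlocal-attention sketch. This is not the paper's implementation; the projections are random stand-ins for learned weights, and all shapes and names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_align(frames, center_idx, rng=None):
    """Toy global attentive alignment.

    The center frame's features act as queries that attend to the
    features of ALL frames (keys/values), so the output aggregates
    globally correlated information instead of pairwise warps.

    frames: array of shape (T, N, C) — T frames, N spatial
    positions (flattened), C channels. Shapes are illustrative.
    """
    T, N, C = frames.shape
    rng = np.random.default_rng(0) if rng is None else rng
    # Hypothetical learned query/key/value projections
    # (random matrices stand in for trained weights here).
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C)
                  for _ in range(3))
    q = frames[center_idx] @ Wq           # (N, C) queries: center frame
    k = frames.reshape(T * N, C) @ Wk     # (T*N, C) keys: all frames
    v = frames.reshape(T * N, C) @ Wv     # (T*N, C) values: all frames
    # Each center-frame position attends over every position of
    # every frame — the "global" part of global attentive alignment.
    attn = softmax(q @ k.T / np.sqrt(C))  # (N, T*N) attention map
    return attn @ v                       # (N, C) aligned center features

frames = np.random.default_rng(1).standard_normal((5, 16, 8))
out = nonlocal_align(frames, center_idx=2)
print(out.shape)  # (16, 8)
```

A full TGA-style fusion step would additionally reuse the center-frame features to weight the fused neighbor features; the sketch above covers only the alignment stage.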
Year
2022
DOI
10.1109/TMM.2021.3115039
Venue
IEEE TRANSACTIONS ON MULTIMEDIA
Keywords
Video bit-depth enhancement, self-attention mechanism, spatiotemporal feature fusion
DocType
Journal
Volume
24
ISSN
1520-9210
Citations
0
PageRank
0.34
References
0
Authors
4
Name           Order  Citations  PageRank
Jing Liu       1      1781       88.09
Ziwen Yang     2      0          0.34
Yuting Su      3      893        71.78
Xiaokang Yang  4      3581       238.09