Title |
---|
Spatial-Temporal Space Hand-in-Hand: Spatial-Temporal Video Super-Resolution via Cycle-Projected Mutual Learning |
Abstract |
---|
Spatial-Temporal Video Super-Resolution (ST-VSR) aims to generate super-resolved videos with higher resolution (HR) and higher frame rate (HFR). Pioneering two-stage methods complete ST-VSR by directly combining two sub-tasks, Spatial Video Super-Resolution (S-VSR) and Temporal Video Super-Resolution (T-VSR), but ignore the reciprocal relations between them. Specifically, 1) T-VSR to S-VSR: temporal correlations provide additional clues for accurate spatial detail representation; 2) S-VSR to T-VSR: abundant spatial information contributes to the refinement of temporal prediction. To this end, we propose a one-stage Cycle-projected Mutual learning network (CycMu-Net) for ST-VSR, which makes full use of spatial-temporal correlations via mutual learning between S-VSR and T-VSR. Specifically, we exploit the mutual information between the two tasks via iterative up-and-down projections, in which spatial and temporal features are fully fused and distilled, aiding high-quality video reconstruction. Besides extensive experiments on benchmark datasets, we also compare CycMu-Net against dedicated S-VSR and T-VSR methods, demonstrating that our method significantly outperforms the state of the art. Code is publicly available at: https://github.com/hhhhhumengshun/CycMuNet. |
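The abstract's "iterative up-and-down projections" follow the general back-projection idea: repeatedly project a high-resolution estimate back down, measure the low-resolution residual, and push the correction back up. The following is a loose conceptual sketch of that refinement loop on a 1-D signal with NumPy; it is not the authors' CycMu-Net code, and all function names here are illustrative assumptions.

```python
import numpy as np

def upsample(x, scale=2):
    # Nearest-neighbour upsampling of a 1-D signal (the "up" projection).
    return np.repeat(x, scale)

def downsample(x, scale=2):
    # Average-pool downsampling (the "down" projection back to LR space).
    return x.reshape(-1, scale).mean(axis=1)

def back_projection_refine(lr, n_iters=3, scale=2):
    """Iteratively refine an HR estimate so that its down-projection
    matches the observed LR input (the cycle-projection idea)."""
    hr = upsample(lr, scale)                 # initial HR guess
    for _ in range(n_iters):
        lr_back = downsample(hr, scale)      # project the estimate back down
        residual = lr - lr_back              # error visible at low resolution
        hr = hr + upsample(residual, scale)  # push the correction back up
    return hr

lr = np.array([1.0, 2.0, 3.0, 4.0])
hr = back_projection_refine(lr)
# After refinement, down-projecting the HR estimate reproduces the LR input.
```

In CycMu-Net this projection cycle additionally alternates between spatial and temporal feature spaces, so each pass fuses clues from both sub-tasks rather than refining resolution alone.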

Year | DOI | Venue |
---|---|---|
2022 | 10.1109/CVPR52688.2022.00356 | IEEE Conference on Computer Vision and Pattern Recognition |

Keywords | DocType | Volume |
---|---|---|
Image and video synthesis and generation | Conference | 2022 |

Issue | ISSN | Citations |
---|---|---|
1 | 2022 CVPR | 0 |

PageRank | References | Authors |
---|---|---|
0.34 | 0 | 6 |

Name | Order | Citations | PageRank |
---|---|---|---|
Mengshun Hu | 1 | 0 | 0.34 |
Kui Jiang | 2 | 94 | 17.91 |
Liang Liao | 3 | 0 | 1.69 |
Jing Xiao | 4 | 312 | 27.78 |
Junjun Jiang | 5 | 1138 | 74.49 |
Zheng Wang | 6 | 352 | 36.33 |