Abstract |
---|
Most existing video face super-resolution (VFSR) methods are trained and evaluated on VoxCeleb1, a dataset designed specifically for speaker identification, whose frames are of low quality. As a consequence, VFSR models trained on this dataset cannot produce visually pleasing results. In this paper, we develop an automatic and scalable pipeline to collect a high-quality video face dataset (VFHQ), which contains over 16,000 high-fidelity clips of diverse interview scenarios. To verify the necessity of VFHQ, we further conduct experiments and demonstrate that VFSR models trained on our VFHQ dataset generate results with sharper edges and finer textures than those trained on VoxCeleb1. In addition, we show that temporal information plays a pivotal role in eliminating video consistency issues as well as further improving visual performance. Based on VFHQ, we also provide a benchmarking study of several state-of-the-art algorithms under bicubic and blind settings. |
Year | DOI | Venue |
---|---|---|
2022 | 10.1109/CVPRW56347.2022.00081 | 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) |
Keywords | DocType | ISSN
---|---|---|
video consistency issues, high-quality dataset, existing video face super-resolution methods, VoxCeleb1, speaker identification, VFSR models, visually pleasing results, automatic pipeline, scalable pipeline, high-quality video face dataset, high-fidelity clips, diverse interview scenarios, VFHQ dataset | Conference | 2160-7508

ISBN | Citations | PageRank
---|---|---|
978-1-6654-8740-5 | 0 | 0.34

References | Authors |
---|---|
7 | 5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Liangbin Xie | 1 | 0 | 0.34 |
Xintao Wang | 2 | 144 | 9.14 |
Honglun Zhang | 3 | 0 | 0.34 |
Chao Dong | 4 | 2064 | 80.72 |
Ying Shan | 5 | 0 | 0.34 |