Abstract |
---|
In recent years, various deep neural networks have been proposed to improve performance on the single-image super-resolution (SISR) task. The commonly used per-pixel MSE loss captures little perceptual difference and tends to make super-resolved images overly smooth, whereas a perceptual loss defined on image features extracted from one or two layers of a pretrained network yields more visually pleasing results. We propose a new perceptual loss that combines features from multiple levels, incorporating the discrepancy between the reconstruction and the ground truth at different structural scales. We also explore several variants of the proposed loss. Extensive quantitative and qualitative comparisons with state-of-the-art methods demonstrate that our loss function drives the same network to produce better results, whether used alone or combined with other loss functions. |
Year | DOI | Venue |
---|---|---|
2020 | 10.1007/s11042-020-08878-7 | Multimedia Tools and Applications
Keywords | DocType | Volume
---|---|---|
Image super-resolution, Deep neural network, Perceptual loss function | Journal | 79

Issue | ISSN | Citations
---|---|---|
29-30 | 1380-7501 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 5