Abstract |
---|
The aim of single image super-resolution (SR) is to generate a high-resolution (HR) image from a low-resolution (LR) observed image. In this paper, we address this task by integrating sparse coding and dictionary learning schemes into an end-to-end deep architecture. More specifically, we propose a new non-linear dictionary learning layer composed of a finite number of recurrent units that computes the sparse codes and also yields the gradients needed to update the dictionary. In addition, we present a new deep network architecture built from the proposed non-linear layers, in which two separate parallel dictionaries represent the LR and HR images respectively. The whole network is optimized by backpropagation, penalizing not only the reconstruction error between the restored and ground-truth HR images but also the discrepancy between the sparse codes of LR-HR image pairs. Various datasets are used to evaluate the performance of the proposed approach, and it is shown to outperform many state-of-the-art single image super-resolution algorithms. |
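The recurrent-unit layer described in the abstract can be illustrated with a generic LISTA/ISTA-style sketch: a dictionary's sparse code is approximated by unrolling a fixed number of shrinkage iterations, and a second (HR) dictionary reconstructs the output from the shared code. This is a minimal illustration, not the authors' exact layer; the function names, the plain ISTA update, and the coupled-dictionary usage below are assumptions made for the sketch.

```python
import numpy as np

def soft_threshold(x, theta):
    # Elementwise shrinkage applied inside each recurrent unit.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_sparse_code(y, D, n_iters=10, lam=0.1):
    """Approximate the sparse code of signal y w.r.t. dictionary D by
    unrolling a fixed number of ISTA iterations (a LISTA-style layer).
    In a learned layer, W and S would be trainable parameters."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of D^T D
    W = D.T / L                              # input transform
    S = np.eye(D.shape[1]) - (D.T @ D) / L   # recurrent transform
    z = soft_threshold(W @ y, lam / L)
    for _ in range(n_iters - 1):
        z = soft_threshold(W @ y + S @ z, lam / L)
    return z

# Coupled-dictionary usage: code the LR patch, reconstruct with the HR dictionary.
rng = np.random.default_rng(0)
D_lr = rng.standard_normal((16, 32)); D_lr /= np.linalg.norm(D_lr, axis=0)
D_hr = rng.standard_normal((64, 32)); D_hr /= np.linalg.norm(D_hr, axis=0)
z_true = np.zeros(32); z_true[[3, 17]] = [1.0, -0.5]   # sparse ground truth
y_lr = D_lr @ z_true                                    # observed LR patch
z = unrolled_sparse_code(y_lr, D_lr, n_iters=50, lam=0.01)
x_hr = D_hr @ z                                         # restored HR patch
```

Because the number of iterations is fixed, the whole computation is a finite feed-forward graph, so gradients with respect to the dictionaries flow through it by ordinary backpropagation, which is the property the proposed layer exploits.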
Year | Venue | Keywords |
---|---|---|
2017 | 2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) | Super-resolution, Dictionary Learning, Deep learning |
Field | DocType | ISSN
---|---|---
Iterative reconstruction, Finite set, Pattern recognition, Computer science, Neural coding, Network architecture, Ground truth, Artificial intelligence, Backpropagation, Image resolution, Encoding (memory) | Conference | 1522-4880
Citations | PageRank | References
---|---|---
1 | 0.35 | 0
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yang Liu | 1 | 1 | 0.69 |
Qingchao Chen | 2 | 16 | 3.97 |
I. J. Wassell | 3 | 5 | 1.51 |