Title
Residual Contrastive Learning for Image Reconstruction: Learning Transferable Representations from Noisy Images
Abstract
This paper is concerned with contrastive learning (CL) for low-level image restoration and enhancement tasks. We propose a new label-efficient learning paradigm based on residuals, residual contrastive learning (RCL), and derive an unsupervised visual representation learning framework suitable for low-level vision tasks with noisy inputs. While supervised image reconstruction aims to minimize residual terms directly, RCL instead builds a connection between residuals and CL by defining a novel instance discrimination pretext task that uses residuals as the discriminative feature. Our formulation mitigates the severe task misalignment, present in existing CL frameworks, between instance discrimination pretext tasks and downstream image reconstruction tasks. Experimentally, we find that RCL learns robust and transferable representations that improve the performance of various downstream tasks, such as denoising and super-resolution, in comparison with recent self-supervised methods designed specifically for noisy inputs. Additionally, our unsupervised pre-training can significantly reduce annotation costs whilst maintaining performance competitive with fully-supervised image reconstruction.
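The abstract describes the pretext task at a high level: residuals (the difference between a noisy input and its reconstruction) serve as the discriminative feature for instance discrimination. The following is a minimal, hypothetical sketch of that idea using an InfoNCE-style contrastive loss over residual features; all array shapes, the augmentation scheme, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive
    against all other positives in the batch (illustrative sketch)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # diagonal = positive pairs

rng = np.random.default_rng(0)
noisy = rng.normal(size=(4, 16))                      # stand-in noisy patches
recon = noisy + 0.05 * rng.normal(size=noisy.shape)   # stand-in reconstructions
residual = noisy - recon                              # residuals as features
# Two "views" of each residual (e.g. from augmented crops of the same image);
# instance discrimination pulls views of the same residual together.
view1 = residual + 0.01 * rng.normal(size=residual.shape)
view2 = residual + 0.01 * rng.normal(size=residual.shape)
loss = info_nce(view1, view2)
print(float(loss))
```

In a real pipeline the residual features would come from an encoder network and the loss would be minimized jointly with the reconstruction objective; this sketch only shows how residuals can act as the discriminated instances.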
Year: 2022
DOI: 10.24963/ijcai.2022/406
Venue: International Joint Conference on Artificial Intelligence (IJCAI)
Keywords: Machine Learning: Self-supervised Learning; Computer Vision: Transfer, low-shot, semi- and un-supervised learning; Machine Learning: Multi-task and Transfer Learning; Machine Learning: Unsupervised Learning
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name                     Order  Citations  PageRank
Nanqing Dong             1      26         3.53
Matteo Maggioni          2      0          0.68
Yongxin Yang             3      0          0.34
Eduardo Pérez-Pellitero  4      0          1.01
Ales Leonardis           5      1636       147.33
Steven McDonagh          6      0          0.68