Title
Pre-Trained Image Processing Transformer
Abstract
As the computing power of modern hardware continues to grow rapidly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This progress is mainly attributed to the representation ability of the transformer and its variant architectures. In this paper, we study low-level computer vision tasks (e.g., denoising, super-resolution, and deraining) and develop a new pre-trained model, namely, the image processing transformer (IPT). To maximally excavate the capability of the transformer, we propose to utilize the well-known ImageNet benchmark for generating a large number of corrupted image pairs. The IPT model is trained on these images with multiple heads and multiple tails. In addition, contrastive learning is introduced to adapt well to different image processing tasks. The pre-trained model can therefore be efficiently employed on the desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks.
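The multi-head / shared-transformer-body / multi-tail design mentioned in the abstract can be pictured with a short sketch. The following PyTorch snippet is only an illustration under assumptions: the layer sizes, task names ("denoise", "sr", "derain"), patch tokenization, and class name IPTSketch are hypothetical and are not taken from the paper or its released code.

# Minimal sketch (assumed structure, not the authors' implementation):
# per-task heads map a corrupted image into a shared feature space,
# a shared transformer body processes flattened feature patches,
# and per-task tails reconstruct the restored image.
import torch
import torch.nn as nn


class IPTSketch(nn.Module):
    def __init__(self, tasks=("denoise", "sr", "derain"), dim=64, patch=4):
        super().__init__()
        self.patch = patch
        # One head per task (hypothetical conv head).
        self.heads = nn.ModuleDict(
            {t: nn.Conv2d(3, dim, kernel_size=3, padding=1) for t in tasks}
        )
        # Shared transformer body operating on flattened feature patches.
        layer = nn.TransformerEncoderLayer(
            d_model=dim * patch * patch, nhead=4, batch_first=True
        )
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        # One tail per task (hypothetical conv tail).
        self.tails = nn.ModuleDict(
            {t: nn.Conv2d(dim, 3, kernel_size=3, padding=1) for t in tasks}
        )

    def forward(self, x, task):
        f = self.heads[task](x)  # (B, C, H, W)
        b, c, h, w = f.shape
        # Split features into non-overlapping patches and flatten them as tokens.
        tokens = f.unfold(2, self.patch, self.patch).unfold(3, self.patch, self.patch)
        tokens = tokens.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * self.patch * self.patch)
        tokens = self.body(tokens)
        # Fold tokens back into the feature-map layout.
        hp, wp = h // self.patch, w // self.patch
        f = tokens.reshape(b, hp, wp, c, self.patch, self.patch)
        f = f.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)
        return self.tails[task](f)


if __name__ == "__main__":
    model = IPTSketch()
    noisy = torch.randn(1, 3, 32, 32)     # stand-in for a corrupted ImageNet crop
    print(model(noisy, "denoise").shape)  # torch.Size([1, 3, 32, 32])

In this sketch only the body is shared across tasks; fine-tuning on a specific task would reuse the pre-trained body while selecting the matching head and tail, which mirrors the abstract's description of adapting one pre-trained model to several restoration tasks.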
Year: 2021
DOI: 10.1109/CVPR46437.2021.01212
Venue: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)
DocType: Conference
ISSN: 1063-6919
Citations: 1
PageRank: 0.34
References: 0
Authors: 10
Name           Order   Citations   PageRank
Hanting Chen   1       29          7.32
Yunhe Wang     2       6           6.82
Tianyu Guo     3       1           1.02
Chang Xu       4       106         20.21
Yiping Deng    5       1           1.70
Zhenhua Liu    6       1           1.02
Siwei Ma       7       2229        203.42
Chunjing Xu    8       61          16.98
Chao Xu        9       34          4.24
Wen Gao        10      1           0.34