Title
LiT: Zero-Shot Transfer with Locked-image text Tuning
Abstract
This paper presents contrastive-tuning, a simple method employing contrastive training to align image and text models while still taking advantage of their pre-training. In our empirical study we find that locked pre-trained image models with unlocked text models work best. We call this instance of contrastive-tuning "Locked-image Tuning" (LiT), which just teaches a text model to read out good representations from a pre-trained image model for new tasks. A LiT model gains the capability of zero-shot transfer to new vision tasks, such as image classification or retrieval. The proposed LiT is widely applicable; it works reliably with multiple pre-training methods (supervised and unsupervised) and across diverse architectures (ResNet, Vision Transformers and MLP-Mixer) using three different image-text datasets. With the transformer-based pre-trained ViT-g/14 model, the LiT model achieves 84.5% zero-shot transfer accuracy on the ImageNet test set, and 81.1% on the challenging out-of-distribution ObjectNet test set.
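The abstract describes the core recipe: contrastive training in which the pre-trained image tower stays frozen ("locked") while only the text tower is updated. Below is a minimal Python (PyTorch) sketch of that symmetric contrastive objective, assuming hypothetical image_encoder and text_encoder modules that map a batch to same-dimension embeddings; it illustrates the general idea, not the authors' implementation.

import torch
import torch.nn.functional as F

def lit_contrastive_loss(image_encoder, text_encoder, images, texts, temperature=0.07):
    # "Locked" image tower: the pre-trained weights receive no gradients.
    with torch.no_grad():
        img_emb = image_encoder(images)
    # "Unlocked" text tower: trained to read out the frozen image representations.
    txt_emb = text_encoder(texts)

    # L2-normalise so dot products are cosine similarities.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)

    # Pairwise image-text similarity logits for the batch.
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)

    # Symmetric InfoNCE loss: match each image to its text and vice versa.
    loss_i2t = F.cross_entropy(logits, labels)
    loss_t2i = F.cross_entropy(logits.t(), labels)
    return (loss_i2t + loss_t2i) / 2

At inference time, zero-shot classification follows the same pattern: class names are embedded with the tuned text tower and each image embedding is assigned to the nearest text embedding.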
Year
2022
DOI
10.1109/CVPR52688.2022.01759
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Vision + language, Representation learning, Transfer/low-shot/long-tail learning
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
7
Name                   Order  Citations  PageRank
Xiaohua Zhai           1      209        13.00
Xiao Wang              2      0          0.34
Mustafa Ayoob Basil    3      5          3.15
Andreas Steiner        4      0          0.34
Daniel Keysers         5      1737       140.59
Alexander Kolesnikov   6      152        11.94
Lucas Beyer            7      232        13.50