Title
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
Abstract
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct “mask-and-predict” pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that such a simple approach achieves performance on four English V&L benchmarks close to that of a model pre-trained with aligned data. Our work challenges the widely held notion that aligned data are necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
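As a rough illustration of the pre-training recipe described in the abstract, the sketch below is a minimal, hypothetical PyTorch example, not the authors' implementation. It assumes a single Transformer encoder shared by both modalities, a BERT-style vocabulary, 2048-dimensional detector region features, and toy random tensors standing in for the real text-only and image-only corpora: masked language modelling runs on text-only batches, masked region prediction runs on image-only batches, and detected object tags are fed alongside the regions as pseudo-text anchors.

```python
# Minimal, hypothetical sketch of "mask-and-predict" pre-training on
# unaligned text-only and image-only data. Sizes, heads, and losses are
# illustrative assumptions, not the paper's released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, REGION_DIM = 30522, 256, 2048  # assumed BERT-style vocab, detector feature size

class TinySharedEncoder(nn.Module):
    """One Transformer encoder shared by text tokens, object tags, and image regions."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, HIDDEN)      # words and detected object tags
        self.img_proj = nn.Linear(REGION_DIM, HIDDEN)   # projected region features
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(HIDDEN, VOCAB)        # predict masked tokens
        self.mrm_head = nn.Linear(HIDDEN, REGION_DIM)   # regress masked region features

    def forward(self, token_ids=None, regions=None):
        parts = []
        if token_ids is not None:
            parts.append(self.tok_emb(token_ids))
        if regions is not None:
            parts.append(self.img_proj(regions))
        return self.encoder(torch.cat(parts, dim=1))

def text_only_step(model, token_ids, masked_pos, targets):
    """Masked language modelling on a caption-free text corpus."""
    h = model(token_ids=token_ids)
    rows = torch.arange(h.size(0)).unsqueeze(1)
    logits = model.mlm_head(h[rows, masked_pos])
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())

def image_only_step(model, regions, tag_ids, masked_pos, region_targets):
    """Masked region prediction on an image-only corpus; detected object tags
    are appended as pseudo-text anchors bridging the two modalities."""
    h = model(token_ids=tag_ids, regions=regions)
    region_h = h[:, tag_ids.size(1):, :]                # hidden states of the regions
    rows = torch.arange(region_h.size(0)).unsqueeze(1)
    pred = model.mrm_head(region_h[rows, masked_pos])
    return F.mse_loss(pred, region_targets)

# Toy usage with random tensors standing in for the two unaligned corpora.
model = TinySharedEncoder()
text_loss = text_only_step(
    model,
    token_ids=torch.randint(0, VOCAB, (2, 16)),
    masked_pos=torch.tensor([[3, 7], [1, 9]]),
    targets=torch.randint(0, VOCAB, (2, 2)))
image_loss = image_only_step(
    model,
    regions=torch.randn(2, 10, REGION_DIM),
    tag_ids=torch.randint(0, VOCAB, (2, 5)),            # e.g. ids for "dog", "frisbee", ...
    masked_pos=torch.tensor([[0, 4], [2, 6]]),
    region_targets=torch.randn(2, 2, REGION_DIM))
(text_loss + image_loss).backward()
```

In this sketch the object tags share the text embedding table, which is one plausible way the two separate mask-and-predict objectives could ground words and image regions in a common representation space.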
Year
2021
Venue
NAACL-HLT
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors (6)
Name                 Order   Citations   PageRank
Liunian Harold Li    1       0           1.01
Haoxuan You          2       37          4.87
Zhecan Wang          3       20          2.74
Zareian, Alireza     4       7           4.20
Shih-Fu Chang        5       13015       1101.53
Kai-Wei Chang        6       4735        276.81