Title
DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting
Abstract
Recent progress has shown that large-scale pre-training using contrastive image-text pairs can be a promising alternative for high-quality visual representation learning from natural language supervision. Benefiting from a broader source of supervision, this new paradigm exhibits impressive transferability to downstream classification tasks and datasets. However, the problem of transferring the knowledge learned from image-text pairs to more complex dense prediction tasks has barely been explored. In this work, we present a new framework for dense prediction that implicitly and explicitly leverages the pre-trained knowledge from CLIP. Specifically, we convert the original image-text matching problem in CLIP into a pixel-text matching problem and use the resulting pixel-text score maps to guide the learning of dense prediction models. By further using contextual information from the image to prompt the language model, we enable the model to better exploit the pre-trained knowledge. Our method is model-agnostic: it can be applied to arbitrary dense prediction systems and to various pre-trained visual backbones, including both CLIP models and ImageNet pre-trained models. Extensive experiments demonstrate the superior performance of our method on semantic segmentation, object detection, and instance segmentation tasks. Code is available at https://github.com/raoyongming/DenseCLIP.
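To make the pixel-text matching idea in the abstract concrete, below is a minimal PyTorch sketch of computing pixel-text score maps: dense visual features are matched against per-class text embeddings by cosine similarity, yielding one score map per class. The function name pixel_text_score_maps, the tensor shapes, and the temperature value are illustrative assumptions for this sketch, not the paper's actual implementation (see the linked repository for that).

import torch
import torch.nn.functional as F

def pixel_text_score_maps(pixel_feats, text_feats, tau=0.07):
    # pixel_feats: (B, C, H, W) dense visual features, assumed already
    # projected into the joint vision-language embedding space.
    # text_feats: (K, C) text embeddings, one per class prompt.
    # Returns (B, K, H, W) score maps that can serve as segmentation
    # logits or as auxiliary guidance for a dense prediction model.
    pixel_feats = F.normalize(pixel_feats, dim=1)  # unit-norm per pixel
    text_feats = F.normalize(text_feats, dim=1)    # unit-norm per class
    # Cosine similarity between every pixel and every class embedding.
    scores = torch.einsum("bchw,kc->bkhw", pixel_feats, text_feats)
    return scores / tau  # temperature sharpens the matching distribution

# Toy usage with random tensors standing in for real CLIP features.
scores = pixel_text_score_maps(torch.randn(2, 512, 32, 32), torch.randn(19, 512))
print(scores.shape)  # torch.Size([2, 19, 32, 32])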
Year
2022
DOI
10.1109/CVPR52688.2022.01755
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Vision + language, Representation learning, Segmentation, grouping and shape analysis
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
8
Name            Order  Citations  PageRank
Rao, Yongming   1      3          29.34
Wenliang Zhao   2      0          1.35
Guangyi Chen    3      0          3.72
Yansong Tang    4      3          14.90
Zheng Zhu       5      1          1.71
Guan Huang      6      1          1.37
Jie Zhou        7      9          25.64
Jiwen Lu        8      3105       153.88