Abstract
---

Referring image segmentation is a fundamental vision-language task that aims to segment out an object referred to by a natural language expression from an image. One of the key challenges behind this task is leveraging the referring expression for highlighting relevant positions in the image. A paradigm for tackling this problem is to leverage a powerful vision-language ("cross-modal") decoder to fuse features independently extracted from a vision encoder and a language encoder. Recent methods have made remarkable advancements in this paradigm by exploiting Transformers as cross-modal decoders, concurrent to the Transformer's overwhelming success in many other vision-language tasks. Adopting a different approach in this work, we show that significantly better cross-modal alignments can be achieved through the early fusion of linguistic and visual features in intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion in the visual feature encoding stage, we can leverage the well-proven correlation modeling power of a Transformer encoder for excavating helpful multi-modal context. This way, accurate segmentation results are readily harvested with a lightweight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref by large margins.
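To make the early-fusion idea concrete, the PyTorch sketch below inserts a cross-modal fusion block between vision encoder stages: visual tokens attend to the word features, and a learned gate feeds the result back onto the visual stream. This is a minimal sketch under stated assumptions; the module name `EarlyFusionBlock`, the cross-attention-plus-gate design, and the tensor shapes are illustrative choices, not the paper's actual implementation.

```python
# Hypothetical early language-vision fusion block (PyTorch).
# Names and design details are illustrative assumptions, not the paper's module.
import torch
import torch.nn as nn

class EarlyFusionBlock(nn.Module):
    """Fuses word features into visual tokens between encoder stages:
    visual tokens cross-attend to the words, and a learned gate controls
    how much linguistic context is added back to the visual stream."""

    def __init__(self, vis_dim: int, lang_dim: int, num_heads: int = 8):
        super().__init__()
        # Cross-attention: visual tokens are queries; words are keys/values.
        self.attn = nn.MultiheadAttention(
            embed_dim=vis_dim, num_heads=num_heads,
            kdim=lang_dim, vdim=lang_dim, batch_first=True)
        # Element-wise gate in [-1, 1] deciding how much fused context to keep.
        self.gate = nn.Sequential(nn.Linear(vis_dim, vis_dim), nn.Tanh())
        self.norm = nn.LayerNorm(vis_dim)

    def forward(self, vis_tokens, lang_feats, lang_pad_mask=None):
        # vis_tokens: (B, H*W, C_v) flattened feature map from one encoder stage
        # lang_feats: (B, T, C_l) word features from the language encoder
        fused, _ = self.attn(query=vis_tokens, key=lang_feats, value=lang_feats,
                             key_padding_mask=lang_pad_mask)
        # Gated residual update: the visual stream is enriched, not replaced.
        return self.norm(vis_tokens + self.gate(fused) * fused)

# Toy usage: 14x14 visual tokens (flattened) fused with a 12-word expression.
block = EarlyFusionBlock(vis_dim=256, lang_dim=768)
vis = torch.randn(2, 14 * 14, 256)
words = torch.randn(2, 12, 768)
print(block(vis, words).shape)  # torch.Size([2, 196, 256])
```

Interleaving such blocks with the encoder stages weaves linguistic context into the visual features while they are still being encoded, so a lightweight mask predictor can operate on feature maps that are already language-aware.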
Year | DOI | Venue
---|---|---
2022 | 10.1109/CVPR52688.2022.01762 | IEEE Conference on Computer Vision and Pattern Recognition

Keywords | DocType | Volume
---|---|---
Vision + language; Segmentation, grouping and shape analysis | Conference | 2022

Issue | Citations | PageRank
---|---|---
1 | 0 | 0.34

References | Authors
---|---
0 | 6

Name | Order | Citations | PageRank |
---|---|---|---
Zhao Yang | 1 | 3 | 1.05 |
Jiaqi Wang | 2 | 77 | 4.20 |
Yansong Tang | 3 | 0 | 0.34 |
Kai Chen | 4 | 130 | 8.65 |
Hengshuang Zhao | 5 | 65 | 8.99 |
Philip H. S. Torr | 6 | 9140 | 636.18 |