Abstract |
---|
Multi-modal tracking has attracted growing attention because it is more accurate and robust than traditional RGB-based tracking in complex scenarios. Its key challenge lies in how to fuse multi-modal data and reduce the gap between modalities. However, multi-modal tracking still suffers severely from data deficiency, which results in insufficient learning of fusion modules. Instead of building such a fusion module, this paper provides a new perspective on multi-modal tracking by attaching importance to multi-modal visual prompts. We design a novel multi-modal prompt tracker (ProTrack) that transfers multi-modal inputs to a single modality via the prompt paradigm. By fully exploiting the tracking ability of pre-trained RGB trackers learned at scale, ProTrack achieves high-performance multi-modal tracking by altering only the inputs, even without any extra training on multi-modal data. Extensive experiments on five benchmark datasets demonstrate the effectiveness of the proposed ProTrack. |
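
The abstract's core idea is to convert multi-modal inputs into a single RGB-like input that a frozen, pre-trained RGB tracker can consume unchanged. The sketch below illustrates this under assumptions: a simple additive visual prompt with an illustrative weight `lam`, and a hypothetical tracker interface (`pretrained_rgb_tracker`); neither is the paper's verified formulation.

```python
import numpy as np

def multimodal_prompt(rgb: np.ndarray, aux: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Fuse an RGB frame with an auxiliary-modality frame (e.g. thermal or
    depth, replicated to 3 channels) into one RGB-like prompted input.

    Additive fusion and the weight `lam` are illustrative assumptions,
    not the exact prompt formulation from the paper.
    """
    rgb = rgb.astype(np.float32)
    aux = aux.astype(np.float32)
    prompted = lam * rgb + (1.0 - lam) * aux  # single-modality visual prompt
    return np.clip(prompted, 0.0, 255.0).astype(np.uint8)

# Usage sketch: the prompted frame goes straight into a frozen RGB tracker,
# so no fusion module is trained (tracker object below is hypothetical).
# prompted_frame = multimodal_prompt(rgb_frame, thermal_frame_3ch)
# bbox = pretrained_rgb_tracker.track(prompted_frame)
```
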
Year | DOI | Venue |
---|---|---|
2022 | 10.1145/3503161.3547851 | International Multimedia Conference |
DocType | Citations | PageRank
---|---|---|
Conference | 0 | 0.34
References | Authors
---|---|
0 | 5
Name | Order | Citations | PageRank |
---|---|---|---|
Jinyu Yang | 1 | 0 | 0.34 |
Zhe Li | 2 | 0 | 0.34 |
Feng Zheng | 3 | 369 | 31.93 |
Ales Leonardis | 4 | 1636 | 147.33 |
Jingkuan Song | 5 | 0 | 0.34 |