Abstract
---
A range of video modeling tasks, from optical flow to multiple object tracking, share the same fundamental challenge: establishing space-time correspondence. Yet, the approaches that dominate each space differ. We take a step towards bridging this gap by extending the recent contrastive random walk formulation to much denser, pixel-level space-time graphs. The main contribution is introducing hierarchy into the search problem by computing the transition matrix between two frames in a coarse-to-fine manner, forming a multiscale contrastive random walk when extended in time. This establishes a unified technique for self-supervised learning of optical flow, keypoint tracking, and video object segmentation. Experiments demonstrate that, for each of these tasks, the unified model achieves performance competitive with strong self-supervised approaches specific to that task.¹

¹ Project page at https://jasonbian97.github.io/flowwalk
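The core object the abstract refers to, the transition matrix of a contrastive random walk, can be sketched as follows. This is a minimal single-scale illustration, not the paper's multiscale implementation: the feature shapes, temperature value, and the palindrome (forward-then-backward) walk with a cycle-consistency loss are all illustrative assumptions.

```python
import numpy as np

def transition_matrix(feat_a, feat_b, temperature=0.07):
    """Row-stochastic transition matrix between two frames' node features.

    feat_a, feat_b: (N, D) L2-normalized features, one row per node
    (a patch or pixel). Entry (i, j) is the probability of walking
    from node i in frame A to node j in frame B.
    """
    affinity = feat_a @ feat_b.T / temperature          # (N, N) similarities
    affinity -= affinity.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(affinity)
    return probs / probs.sum(axis=1, keepdims=True)     # softmax over rows

# Illustrative random features for 3 frames, 16 nodes, 8-dim embeddings.
rng = np.random.default_rng(0)
feats = [rng.normal(size=(16, 8)) for _ in range(3)]
feats = [f / np.linalg.norm(f, axis=1, keepdims=True) for f in feats]

# Walk forward through the frames, then back (a palindrome walk).
walk = np.eye(16)
for a, b in zip(feats[:-1], feats[1:]):                 # t -> t+1
    walk = walk @ transition_matrix(a, b)
for a, b in zip(feats[::-1][:-1], feats[::-1][1:]):     # t -> t-1
    walk = walk @ transition_matrix(a, b)

# Cycle-consistency objective: each node should return to where it started,
# i.e. the round-trip matrix should put its mass on the diagonal.
loss = -np.log(np.diag(walk) + 1e-8).mean()
```

The paper's contribution, per the abstract, is computing this transition matrix coarse-to-fine over a dense pixel-level graph rather than in one shot as above.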
| Year | DOI | Venue |
|---|---|---|
| 2022 | 10.1109/CVPR52688.2022.00640 | IEEE Conference on Computer Vision and Pattern Recognition |
| Keywords | DocType | Volume |
|---|---|---|
| Motion and tracking | Conference | 2022 |

| Issue | Citations | PageRank |
|---|---|---|
| 1 | 0 | 0.34 |

| References | Authors |
|---|---|
| 0 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Zhangxing Bian | 1 | 0 | 0.34 |
| Allan Jabri | 2 | 36 | 3.04 |
| Alexei A. Efros | 3 | 10301 | 634.66 |
| Andrew Owens | 4 | 74 | 5.13 |