| Title |
|---|
| Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos |
| Abstract |
|---|
| We introduce an approach for pre-training egocentric video models using large-scale third-person video datasets. Learning from purely egocentric data is limited by low dataset scale and diversity, while using purely exocentric (third-person) data introduces a large domain mismatch. Our idea is to discover latent signals in third-person video that are predictive of key egocentric-specific properties. Incorporating these signals as knowledge distillation losses during pre-training results in models that benefit from both the scale and diversity of third-person video data, as well as representations that capture salient egocentric properties. Our experiments show that our "Ego-Exo" framework can be seamlessly integrated into standard video models; it outperforms all baselines when fine-tuned for egocentric activity recognition, achieving state-of-the-art results on Charades-Ego and EPIC-Kitchens-100. |
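The abstract describes a pre-training objective that combines a standard third-person task loss with knowledge-distillation terms toward egocentric-specific properties. A minimal sketch of such a combined loss is below; the function name, the MSE distillation term, and the shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ego_exo_pretrain_loss(action_logits, action_label,
                          student_scores, teacher_scores, lam=1.0):
    """Hypothetical sketch of an Ego-Exo-style pre-training objective:
    a cross-entropy action-classification loss on third-person video,
    plus a distillation term pulling a student head's egocentric-property
    predictions toward a pre-computed teacher's pseudo-labels.
    All names and shapes are illustrative, not the paper's exact losses."""
    # Softmax cross-entropy for the third-person action-recognition task.
    p = np.exp(action_logits - action_logits.max())
    p /= p.sum()
    task_loss = -np.log(p[action_label])
    # MSE between student and teacher egocentric-property scores
    # (e.g. hand presence or interaction saliency, per the abstract).
    distill_loss = np.mean((student_scores - teacher_scores) ** 2)
    return task_loss + lam * distill_loss
```

Setting `lam=0` recovers plain third-person pre-training, making it easy to ablate the distillation signal.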
| Year | DOI | Venue |
|---|---|---|
| 2021 | 10.1109/CVPR46437.2021.00687 | 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021) |

| DocType | ISSN | Citations |
|---|---|---|
| Conference | 1063-6919 | 0 |

| PageRank | References | Authors |
|---|---|---|
| 0.34 | 14 | 4 |
| Name | Order | Citations | PageRank |
|---|---|---|---|
| Yanghao Li | 1 | 194 | 13.98 |
| Tushar Nagarajan | 2 | 18 | 1.62 |
| Bo Xiong | 3 | 58 | 5.74 |
| Kristen Grauman | 4 | 6258 | 326.34 |