Title
Tracking People by Predicting 3D Appearance, Location and Pose
Abstract
We present an approach for tracking people in monocular videos by predicting their future 3D representations. To achieve this, we first lift people to 3D from a single frame in a robust manner. This lifting includes the 3D pose of the person, their location in 3D space, and their 3D appearance. As we track a person, we collect these 3D observations over time in a tracklet representation. Given the 3D nature of our observations, we build a temporal model for each of these attributes and use it to predict the future state of the tracklet, including its 3D appearance, 3D location, and 3D pose. For a future frame, we compute the similarity between each tracklet's predicted state and the single-frame observations in a probabilistic manner. Association is solved with simple Hungarian matching, and the matches are used to update the respective tracklets. We evaluate our approach on various benchmarks and report state-of-the-art results. Code and models are available at: https://brjathu.github.io/PHALP.
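To make the pipeline concrete, below is a minimal Python sketch of the predict-then-match loop the abstract describes. The Tracklet class, the exponential-moving-average appearance model, the constant-velocity location model, and the isotropic Gaussian likelihoods are illustrative assumptions, not the paper's actual models; only the overall structure (per-attribute temporal prediction, probabilistic similarity, Hungarian association via SciPy's linear_sum_assignment, tracklet update) follows the abstract.

```python
# Hedged sketch of a predict-match-update tracking loop. All modeling
# choices (EMA appearance, constant-velocity location, Gaussian
# likelihoods) are illustrative stand-ins, not the paper's models.
import numpy as np
from scipy.optimize import linear_sum_assignment


class Tracklet:
    """Accumulates 3D observations (appearance, location, pose) over time."""

    def __init__(self, appearance, location, pose):
        self.appearance = appearance      # running appearance estimate
        self.locations = [location]       # history of 3D locations
        self.pose = pose                  # most recent 3D pose

    def predict(self):
        """Predict the next state from simple per-attribute temporal models."""
        loc = self.locations[-1]
        if len(self.locations) > 1:       # constant-velocity extrapolation
            loc = loc + (self.locations[-1] - self.locations[-2])
        return self.appearance, loc, self.pose

    def update(self, appearance, location, pose, alpha=0.9):
        """Fold a matched single-frame observation into the tracklet."""
        self.appearance = alpha * self.appearance + (1 - alpha) * appearance
        self.locations.append(location)
        self.pose = pose


def log_likelihood(pred, obs, sigma):
    """Isotropic-Gaussian log-likelihood of an observation given a prediction."""
    return -0.5 * np.sum((pred - obs) ** 2) / sigma ** 2


def associate(tracklets, detections):
    """Hungarian matching between predicted tracklet states and detections.

    `detections` holds (appearance, location, pose) tuples lifted from the
    current frame; the cost is the negative total log-likelihood.
    """
    cost = np.zeros((len(tracklets), len(detections)))
    for i, t in enumerate(tracklets):
        pred_app, pred_loc, pred_pose = t.predict()
        for j, (app, loc, pose) in enumerate(detections):
            cost[i, j] = -(log_likelihood(pred_app, app, 1.0)
                           + log_likelihood(pred_loc, loc, 0.5)
                           + log_likelihood(pred_pose, pose, 1.0))
    rows, cols = linear_sum_assignment(cost)  # minimize total cost
    for i, j in zip(rows, cols):
        tracklets[i].update(*detections[j])
    return list(zip(rows, cols))


# Toy usage with random features standing in for real per-frame 3D lifting.
rng = np.random.default_rng(0)
tracks = [Tracklet(rng.normal(size=8), rng.normal(size=3), rng.normal(size=24))
          for _ in range(2)]
dets = [(rng.normal(size=8), rng.normal(size=3), rng.normal(size=24))
        for _ in range(2)]
print(associate(tracks, dets))
```

A full tracker would additionally spawn new tracklets for unmatched detections and retire stale ones; the sketch keeps only the prediction, probabilistic scoring, and matching steps named in the abstract.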
Year
2022
DOI
10.1109/CVPR52688.2022.00276
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Pose estimation and tracking, 3D from single images
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
4
Name                   Order  Citations  PageRank
Jathushan Rajasegaran  1      13         4.62
Georgios Pavlakos      2      183        8.80
Angjoo Kanazawa        3      272        10.36
Jitendra Malik         4      39445      3782.10