Title
Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds
Abstract
3D single object tracking (3D SOT) in LiDAR point clouds plays a crucial role in autonomous driving. Current approaches all follow the Siamese paradigm based on appearance matching. However, LiDAR point clouds are usually textureless and incomplete, which hinders effective appearance matching. Besides, previous methods greatly overlook the critical motion clues among targets. In this work, beyond 3D Siamese tracking, we introduce a motion-centric paradigm to handle 3D SOT from a new perspective. Following this paradigm, we propose a matching-free two-stage tracker, M2-Track. At the 1st stage, M2-Track localizes the target within successive frames via motion transformation. It then refines the target box through motion-assisted shape completion at the 2nd stage. Extensive experiments confirm that M2-Track significantly outperforms previous state-of-the-art methods on three large-scale datasets while running at 57 FPS (~8%, ~17%, and ~22% precision gains on KITTI, NuScenes, and Waymo Open Dataset, respectively). Further analysis verifies each component's effectiveness and shows the motion-centric paradigm's promising potential when combined with appearance matching. Code will be made available at https://github.com/Ghostish/Open3DSOT.
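The two-stage flow described in the abstract (1st stage: localize via inter-frame motion; 2nd stage: refine via motion-assisted shape completion) can be illustrated with a minimal toy sketch. This is not the paper's learned network: the centroid-offset motion estimate, the translation-only motion model, and all function names here are simplifying assumptions for illustration only.

```python
import numpy as np

def estimate_motion(prev_pts, curr_pts):
    # Illustrative stand-in for the learned motion regressor:
    # the centroid offset approximates the target's inter-frame
    # translation (the real method also predicts rotation).
    return curr_pts.mean(axis=0) - prev_pts.mean(axis=0)

def m2_track_step(prev_pts, curr_pts, prev_center):
    # 1st stage: localize the target by transforming the previous
    # box center with the predicted relative motion.
    motion = estimate_motion(prev_pts, curr_pts)
    coarse_center = prev_center + motion

    # 2nd stage: motion-assisted shape completion -- transform the
    # previous frame's target points into the current frame and merge
    # them with the current points to get a denser target cloud,
    # then refine the center estimate on the merged points.
    merged = np.vstack([prev_pts + motion, curr_pts])
    refined_center = merged.mean(axis=0)
    return refined_center

# Toy usage: a 3-point "target" that translates 2 m along x.
prev = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
curr = prev + np.array([2.0, 0.0, 0.0])
print(m2_track_step(prev, curr, prev.mean(axis=0)))
```

The key design point conveyed by the sketch is that localization relies on relative motion between consecutive frames rather than appearance matching against a template, and the densified (motion-merged) point cloud supports the final box refinement.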
Year
2022
DOI
10.1109/CVPR52688.2022.00794
Venue
IEEE Conference on Computer Vision and Pattern Recognition
Keywords
Motion and tracking, 3D from multi-view and sensors, Navigation and autonomous driving
DocType
Conference
Volume
2022
Issue
1
Citations
0
PageRank
0.34
References
0
Authors
7
Name            Order  Citations  PageRank
Chaoda Zheng    1      0          1.01
Xing Xu         2      199        27.30
Haiming Zhang   3      0          0.34
Baoyuan Wang    4      0          0.34
Shenghui Cheng  5      0          0.34
Shuguang Cui    6      521        54.46
Zhen Li         7      135        15.45