Title
TransFusion: Cross-view Fusion with Transformer for 3D Human Pose Estimation
Abstract
Estimating 2D human poses in each view is typically the first step in calibrated multi-view 3D pose estimation. However, the performance of 2D pose detectors degrades in challenging situations such as occlusions and oblique viewing angles. To address these challenges, previous works derive point-to-point correspondences between different views from epipolar geometry and use these correspondences to merge prediction heatmaps or feature representations. In contrast to such post-prediction merging/calibration, we introduce a transformer framework for multi-view 3D pose estimation that directly improves individual 2D predictors by integrating information from other views. Inspired by previous multi-modal transformers, we design a unified transformer architecture, named TransFusion, to fuse cues from both the current view and neighboring views. Moreover, we propose the concept of the epipolar field to encode 3D positional information into the transformer model. The 3D positional encoding guided by the epipolar field provides an efficient way of encoding correspondences between pixels of different views. Experiments on Human3.6M and Ski-Pose show that our method is more efficient and yields consistent improvements over other fusion methods. In particular, we achieve 25.8 mm MPJPE on Human3.6M with only 5M parameters at 256 x 256 input resolution.
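To make the abstract's geometric ingredients concrete, below is a minimal NumPy sketch of (1) deriving point-to-line correspondences between two calibrated views from epipolar geometry, and (2) turning the point-to-line distance into a soft per-pixel "epipolar field." The function names, the Gaussian weighting, the bandwidth sigma, and the toy cameras are all illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: epipolar correspondences between two calibrated views and a soft
# "epipolar field" over the pixels of the second view. Assumptions: the
# field is modeled as a Gaussian of the point-to-line distance; all helper
# names here are hypothetical.
import numpy as np

def camera_center(P):
    """Camera center = null space of the 3x4 projection matrix P."""
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]
    return C / C[3]

def fundamental_from_projections(P1, P2):
    """F such that x2^T F x1 = 0, via the standard formula
    F = [e2]_x P2 P1^+, where e2 = P2 C1 is the epipole in view 2."""
    C1 = camera_center(P1)
    e2 = P2 @ C1
    e2_cross = np.array([[0.0, -e2[2], e2[1]],
                         [e2[2], 0.0, -e2[0]],
                         [-e2[1], e2[0], 0.0]])
    return e2_cross @ P2 @ np.linalg.pinv(P1)

def epipolar_field(F, pixel, grid_uv, sigma=8.0):
    """Soft correspondence weight between one pixel of view 1 and every
    pixel of view 2: a Gaussian of the distance to the epipolar line.
    (The Gaussian form is an assumption for illustration.)"""
    x1 = np.array([pixel[0], pixel[1], 1.0])
    a, b, c = F @ x1                       # epipolar line a*u + b*v + c = 0
    d = np.abs(a * grid_uv[..., 0] + b * grid_uv[..., 1] + c) / np.hypot(a, b)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

# Toy usage with two hypothetical cameras at 256 x 256 resolution.
K = np.array([[500.0, 0.0, 128.0], [0.0, 500.0, 128.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])  # 90-degree yaw
P2 = K @ np.hstack([R, np.array([[0.0], [0.0], [2.0]])])

F = fundamental_from_projections(P1, P2)
uu, vv = np.meshgrid(np.arange(256.0), np.arange(256.0))
grid = np.stack([uu, vv], axis=-1)         # (u, v) coordinates of view 2
weights = epipolar_field(F, pixel=(100, 80), grid_uv=grid)
print(weights.shape)                       # (256, 256)
```

In a TransFusion-style model, such per-pixel weights are one plausible way to realize an epipolar-field-guided 3D positional encoding, e.g., as an additive bias on attention logits between tokens of the current view and tokens of a neighboring view; the paper's exact construction may differ.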
Year
2021
Venue
British Machine Vision Conference
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
10

Name             Order  Citations  PageRank
Haoyu Ma         1      1          3.40
Liangjian Chen   2      1          3.06
Deying Kong      3      0          2.03
Zhe Wang         4      198        24.41
Xingwei Liu      5      1          1.71
Hao Tang         6      8          2.16
Xiangyi Yan      7      0          2.03
Yusheng Xie      8      95         14.63
Shih-Yao Lin     9      59         12.29
Xiaohui Xie      10     624        52.73