Title
Texture Semantically Aligned with Visibility-aware for Partial Person Re-identification
Abstract
In real-world person re-identification (ReID), pedestrians are often occluded by other pedestrians or objects; moreover, changes in pose and observation viewpoint are also common in partial-person ReID. To the best of our knowledge, few works address these two issues simultaneously. In this work, we propose a novel visibility-aware texture semantic alignment (TSA) approach for partial person ReID, in which occlusion and pose changes are handled jointly in an end-to-end unified framework. Specifically, we first employ a texture alignment scheme guided by the semantic visibility of a person image to address pose changes, which enhances the alignment and generalization capability of the model. Second, we design a human-pose-based partial region alignment scheme to address occlusion, which makes the TSA method emphasize the body parts shared between images. Finally, the two networks are learned jointly. Extensive experimental results demonstrate that the proposed TSA method is effective and robust in simultaneously handling occlusion and pose changes; it outperforms state-of-the-art approaches by a large margin, improving rank-1 accuracy over the visibility-aware part model (VPM, published in CVPR 2019) by 5% and 6.4% on the Partial ReID and Partial-iLIDS datasets, respectively.
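The abstract's core idea of emphasizing shared body parts can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes hypothetical part-level features and binary visibility vectors, and computes a distance only over parts visible in both images, so occluded regions do not contribute to matching.

```python
import numpy as np

def visibility_aware_distance(feats_a, vis_a, feats_b, vis_b):
    """Hypothetical sketch: per-part cosine distance averaged over
    mutually visible parts, ignoring parts occluded in either image.

    feats_a, feats_b: (num_parts, dim) part feature matrices
    vis_a, vis_b:     (num_parts,) binary visibility vectors
    """
    shared = vis_a * vis_b  # 1 where the part is visible in both images
    if shared.sum() == 0:
        return float("inf")  # no comparable parts
    # L2-normalize each part feature, then take 1 - cosine similarity
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    per_part = 1.0 - (a * b).sum(axis=1)
    # weight by shared visibility and average over shared parts only
    return float((per_part * shared).sum() / shared.sum())
```

For example, if one part is occluded in the gallery image, its (possibly corrupted) feature is excluded from the distance entirely, which is the intuition behind matching on shared regions.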
Year
2020
DOI
10.1145/3394171.3413833
Venue
MM '20: The 28th ACM International Conference on Multimedia, Seattle, WA, USA, October 2020
DocType
Conference
ISBN
978-1-4503-7988-5
Citations
3
PageRank
0.37
References
21
Authors
6
Name           Order  Citations  PageRank
Li-Shuai Gao   1      9          1.12
Hua Zhang      2      253        15.16
Zan Gao        3      261        27.71
Weili Guan     4      43         10.84
Zhiyong Cheng  5      49         6.05
Meng Wang      6      3094       167.38