Title
Pose-Guided Visible Part Matching for Occluded Person ReID
Abstract
Occluded person re-identification is a challenging task because appearance varies substantially under diverse obstacles, especially in crowded scenes. To address this issue, we propose a Pose-guided Visible Part Matching (PVPM) method that jointly learns discriminative features with pose-guided attention and self-mines part visibility in an end-to-end framework. Specifically, the proposed PVPM includes two key components: 1) a pose-guided attention (PGA) method for part feature pooling that exploits more discriminative local features; 2) a pose-guided visibility predictor (PVP) that estimates whether a part is occluded. Since no ground-truth annotations of occluded parts are available for training, we exploit the characteristic of part correspondence in positive pairs and self-mine the correspondence scores via graph matching. The resulting correspondence scores are then used as pseudo-labels for the visibility predictor (PVP). Experimental results on three occluded benchmarks show that the proposed method achieves performance competitive with state-of-the-art methods. The source code is available at https://github.com/hh23333/PVPM.
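Below is a minimal PyTorch sketch of the two components named in the abstract: PGA pooling and the PVP visibility head. All module names, dimensions, and the part count are illustrative assumptions, not the authors' implementation (see the linked repository for that).

import torch
import torch.nn as nn

class PoseGuidedParts(nn.Module):
    """Sketch of PGA part pooling plus a PVP visibility head (assumed design)."""
    def __init__(self, feat_dim=2048, num_joints=17, num_parts=6):
        super().__init__()
        # PGA: turn pose heatmaps into one spatial attention map per body part.
        self.attn = nn.Conv2d(num_joints, num_parts, kernel_size=1)
        # PVP: predict a per-part visibility score from global pose cues;
        # in the paper this head is supervised by graph-matching pseudo-labels.
        self.vis = nn.Sequential(nn.Linear(num_joints, num_parts), nn.Sigmoid())

    def forward(self, feat_map, pose_maps):
        # feat_map:  (B, C, H, W) appearance features from a CNN backbone.
        # pose_maps: (B, K, H, W) joint heatmaps from an off-the-shelf pose estimator.
        a = torch.softmax(self.attn(pose_maps).flatten(2), dim=-1)  # (B, P, H*W)
        f = feat_map.flatten(2)                                     # (B, C, H*W)
        part_feats = torch.bmm(a, f.transpose(1, 2))                # (B, P, C)
        visibility = self.vis(pose_maps.mean(dim=(2, 3)))           # (B, P)
        return part_feats, visibility

At matching time, the per-part distances between two images would be weighted by the product of both images' visibility scores, so parts occluded in either image contribute little to the final distance.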
Year        2020
DOI         10.1109/CVPR42600.2020.01176
Venue       CVPR
DocType     Conference
Citations   5
PageRank    0.39
References  0
Authors     4
Name          Order  Citations  PageRank
Shang Gao     1      5          1.07
Jingya Wang   2      32         2.55
Huchuan Lu    3      4827       186.26
Zimo Liu      4      10         0.80