Title
Dual-modality hard mining triplet-center loss for visible infrared person re-identification
Abstract
Visible infrared person re-identification (VI-reid) has gradually gained popularity as a crucial branch of person re-identification (reid). It not only exhibits the intra-class variations caused by viewpoint changes, pedestrian posture changes, complex backgrounds, resolution differences, and occlusions that exist in traditional visible-visible person re-identification (VV-reid), but also suffers from enormous cross-modality discrepancies resulting from the different reflection spectra of visible-light and infrared cameras. In this paper, we put forward a novel loss function, called the dual-modality hard mining triplet-center loss (DTCL), to optimize intra-class and inter-class distances and to supervise the network to learn discriminative feature representations. DTCL learns a visible modality center and an infrared modality center separately for each class, and mines hard cross-modality and intra-modality triplets online for each sample. In particular, it pulls samples closer to the center of their own class and pushes them away from the centers of other classes. In addition, we propose a dual-path part-based feature learning network (DPFLN) to learn local features and alleviate cross-modality discrepancies. Experiments on two cross-modality reid datasets yield promising results.
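To make the described objective concrete, below is a minimal PyTorch sketch of a dual-modality triplet-center-style loss, written only from the abstract's description: one learnable center per class and per modality, with each sample pulled toward its own (class, modality) center and pushed at least a margin farther from the hardest (closest) wrong-class center. All names here (DualModalityTripletCenterLoss, num_classes, feat_dim, margin) are illustrative assumptions, not the authors' code, and the paper's additional cross-modality triplet mining is not reproduced in this simplified sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualModalityTripletCenterLoss(nn.Module):
    """Sketch: one learnable center per class and per modality (0=visible,
    1=infrared). Each sample is pulled toward its own (class, modality)
    center and pushed at least `margin` farther from the hardest
    (i.e. closest) wrong-class center of the same modality."""

    def __init__(self, num_classes: int, feat_dim: int, margin: float = 0.5):
        super().__init__()
        # centers[m, c] is the center of class c in modality m: shape [2, C, D]
        self.centers = nn.Parameter(torch.randn(2, num_classes, feat_dim))
        self.margin = margin
        self.num_classes = num_classes

    def forward(self, feats, labels, modalities):
        # feats: [N, D]; labels: [N]; modalities: [N] with values in {0, 1}
        mod_centers = self.centers[modalities]                      # [N, C, D]
        dists = (feats.unsqueeze(1) - mod_centers).pow(2).sum(-1)   # [N, C]

        # Squared distance to the sample's own class center (pull term)
        d_pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)

        # Hard mining: closest center among all *other* classes (push term)
        mask = F.one_hot(labels, self.num_classes).bool()
        d_neg = dists.masked_fill(mask, float('inf')).min(dim=1).values

        # Triplet-center hinge: own center must beat the hardest wrong center
        return F.relu(d_pos - d_neg + self.margin).mean()
```

As a usage example, `loss_fn = DualModalityTripletCenterLoss(num_classes=395, feat_dim=512)` can be applied to a batch of pooled features with integer class labels and binary modality labels; the centers are learned jointly with the backbone since they are registered as parameters.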
Year
2021
DOI
10.1016/j.knosys.2021.106772
Venue
Knowledge-Based Systems
Keywords
Deep learning, Visible infrared person re-identification, Dual-modality hard mining triplet-center loss, Local feature
DocType
Journal
Volume
215
ISSN
0950-7051
Citations
1
PageRank
0.37
References
37
Authors
4
Name            Order  Citations  PageRank
Xin Cai         1      1          0.37
Li Liu          2      1264       61.72
Lei Zhu         3      854        51.69
Huaxiang Zhang  4      436        56.32