Title: Omnidirectional Feature Learning for Person Re-Identification
Abstract: Person re-identification (PReID) has received increasing attention because it plays an important role in intelligent surveillance. Many state-of-the-art PReID methods are part-based deep models, and most of them learn part-level feature representations of a person's body along the horizontal direction only; the feature representation of the body along the vertical direction is usually ignored. In addition, the relationships between these part features, and between different feature channels, are not considered. In this paper, we introduce a multi-branch deep model for PReID. Specifically, the model consists of five branches: two branches learn part features with spatial information from the horizontal and vertical orientations; one branch learns the interdependencies between the feature channels generated by the last convolutional layer of the backbone network; and the remaining two branches are identification and triplet sub-networks in which a discriminative global feature and a corresponding measurement can be learned simultaneously. All five branches improve the quality of representation learning. We conduct extensive comparison experiments on three benchmarks: Market-1501, CUHK03, and DukeMTMC-reID. The proposed deep framework outperforms other competitive state-of-the-art methods. The code is available at https://github.com/caojunying/person-reidentification.
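The abstract's branch design can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the part counts, the 2048x24x8 backbone output size, the margin value, and all function names are illustrative assumptions; it only shows the general idea of pooling horizontal vs. vertical stripes, summarizing channels, and scoring a triplet.

```python
import numpy as np

def horizontal_parts(fmap, n_parts=6):
    """Split a C x H x W feature map into horizontal stripes (along height)
    and average-pool each stripe into one C-dim part descriptor.
    Hypothetical sketch of a horizontal part branch."""
    stripes = np.array_split(fmap, n_parts, axis=1)
    return [s.mean(axis=(1, 2)) for s in stripes]

def vertical_parts(fmap, n_parts=4):
    """Same idea along the width axis (vertical stripes)."""
    stripes = np.array_split(fmap, n_parts, axis=2)
    return [s.mean(axis=(1, 2)) for s in stripes]

def channel_descriptor(fmap):
    """Global average pooling gives one value per channel; a branch that
    models channel interdependencies (e.g. squeeze-and-excitation-style
    gating) would operate on this vector."""
    return fmap.mean(axis=(1, 2))

def triplet_loss(anchor, pos, neg, margin=0.3):
    """Standard margin-based triplet loss on embedding vectors, the kind
    of objective a triplet sub-network would optimize."""
    d_ap = np.linalg.norm(anchor - pos)
    d_an = np.linalg.norm(anchor - neg)
    return max(0.0, d_ap - d_an + margin)

# Assumed ResNet-50-style output for a 384x128 person crop: 2048 x 24 x 8.
fmap = np.random.rand(2048, 24, 8)
h_parts = horizontal_parts(fmap)   # 6 descriptors of shape (2048,)
v_parts = vertical_parts(fmap)     # 4 descriptors of shape (2048,)
chan = channel_descriptor(fmap)    # one (2048,) channel summary
```

The horizontal and vertical branches differ only in the split axis, which is why modeling both directions adds complementary spatial information at little extra cost.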
Year: 2019
DOI: 10.1109/ACCESS.2019.2901764
Venue: IEEE Access
Keywords: Person re-identification, deep learning, part feature, triplet model, identification model
DocType: Journal
Volume: 7
ISSN: 2169-3536
Citations: 0
PageRank: 0.34
References: 0
Authors: 8
Name              Order  Citations  PageRank
Di Wu             1      9          3.88
Hong-Wei Yang     2      0          0.34
De-Shuang Huang   3      5532       357.50
Chang-an Yuan     4      85         9.88
Xiao Qin          5      0          0.34
Yang Zhao         6      836        116.78
Xin-Yong Zhao     7      0          0.34
Jian-Hong Sun     8      0          0.34