Title
Aggregating Deep Pyramidal Representations for Person Re-Identification
Abstract
Learning discriminative, view-invariant and multi-scale representations of person appearance at different semantic levels is of paramount importance for person Re-Identification (Re-ID). The community has spent considerable effort learning deep Re-ID models that capture a holistic, single-semantic-level feature representation. To improve on these results, additional visual attributes and body-part-driven models have been considered; however, these require extensive human annotation labor or demand additional computational effort. We argue that a pyramid-inspired method capturing multi-scale information may overcome such requirements. Specifically, multi-scale stripes representing the visual information of a person can be fed to a novel architecture that factorizes them into latent discriminative factors at multiple semantic levels. A multi-task loss is combined with a curriculum learning strategy to learn a discriminative and invariant person representation, which is exploited for triplet-similarity learning. Results on three benchmark Re-ID datasets demonstrate that better performance than existing methods is achieved (e.g., more than 90% accuracy on the Duke-MTMC dataset).
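The two ingredients named in the abstract, a pyramid of multi-scale horizontal stripes pooled into one descriptor and a margin-based triplet-similarity objective, can be sketched as below. This is a minimal illustration, not the paper's architecture: the pyramid levels, average pooling, and margin value are assumptions chosen for clarity.

```python
import numpy as np

def pyramid_stripe_descriptor(feat, levels=(1, 2, 4)):
    """Pool a C x H x W feature map into horizontal stripes at several
    pyramid levels and concatenate the per-stripe averages.

    `levels=(1, 2, 4)` and average pooling are illustrative choices,
    not the paper's exact configuration.
    """
    c, h, w = feat.shape
    parts = []
    for n in levels:
        # Split the height into n (roughly) equal stripes.
        bounds = np.linspace(0, h, n + 1).astype(int)
        for i in range(n):
            stripe = feat[:, bounds[i]:bounds[i + 1], :]
            parts.append(stripe.mean(axis=(1, 2)))  # one C-dim vector per stripe
    # Descriptor length = C * (number of stripes across all levels).
    return np.concatenate(parts)

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard margin-based triplet loss on L2 distances:
    max(0, d(a, p) - d(a, n) + margin)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

With levels (1, 2, 4), a C-channel feature map yields a 7C-dimensional descriptor; training would then pull same-identity descriptors together and push different identities apart via the triplet term.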
Year
2019
DOI
10.1109/CVPRW.2019.00196
Venue
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Field
Computer vision, Computer science, Artificial intelligence
DocType
Conference
ISSN
2160-7508
Citations
2
PageRank
0.35
References
0
Authors
3
Name, Order, Citations, PageRank
Niki Martinel, 1, 349, 24.39
Gian Luca Foresti, 2, 44, 7.06
C. Micheloni, 3, 934, 62.52