Title
Deep Cross-Modality Alignment for Multi-Shot Person Re-IDentification
Abstract
Multi-shot person Re-IDentification (Re-ID) has recently received increasing research attention, as its problem setting is more realistic than that of single-shot Re-ID in practical applications. While many large-scale single-shot Re-ID human image datasets have been released, most existing multi-shot Re-ID video sequence datasets contain only a few hundred human instances, which hinders further improvement of multi-shot Re-ID performance. To this end, we propose a deep cross-modality alignment network that jointly exploits both human sequence pairs and image pairs to train better multi-shot human Re-ID models, i.e., by transferring knowledge from image data to sequence data. To mitigate the modality-to-modality mismatch, the proposed network is equipped with an image-to-sequence adaptation module, called the cross-modality alignment sub-network, which maps each human image into a pseudo human sequence to facilitate knowledge transfer and joint training. Extensive experimental results on several multi-shot person Re-ID benchmarks demonstrate the significant performance gains brought by the proposed network.
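The image-to-sequence adaptation idea described in the abstract can be illustrated with a minimal PyTorch-style sketch: a single image feature is expanded into a "pseudo sequence" so that image data and video data can be handled by one sequence-level Re-ID model. All module names, feature dimensions, and layer choices below (PseudoSequenceGenerator, SequenceEmbedder, per-frame linear projections, temporal average pooling) are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of the image-to-sequence adaptation idea; all names,
# dimensions, and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn

class PseudoSequenceGenerator(nn.Module):
    """Maps one image embedding to T frame-level embeddings (a pseudo sequence)."""
    def __init__(self, feat_dim: int = 2048, seq_len: int = 8):
        super().__init__()
        # One small projection per pseudo frame (an assumption; the paper's
        # alignment sub-network may be implemented differently).
        self.frame_projs = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(seq_len)]
        )

    def forward(self, img_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (batch, feat_dim) -> pseudo sequence: (batch, T, feat_dim)
        frames = [proj(img_feat) for proj in self.frame_projs]
        return torch.stack(frames, dim=1)

class SequenceEmbedder(nn.Module):
    """Shared sequence-level embedding: temporal average pooling + FC."""
    def __init__(self, feat_dim: int = 2048, embed_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, embed_dim)

    def forward(self, seq_feat: torch.Tensor) -> torch.Tensor:
        # seq_feat: (batch, T, feat_dim) -> (batch, embed_dim)
        return self.fc(seq_feat.mean(dim=1))

if __name__ == "__main__":
    gen, embed = PseudoSequenceGenerator(), SequenceEmbedder()
    image_feats = torch.randn(4, 2048)     # features of 4 still images
    video_feats = torch.randn(4, 8, 2048)  # features of 4 real sequences
    pseudo_seq = gen(image_feats)          # (4, 8, 2048)
    # Both modalities now pass through the same sequence embedder,
    # so image pairs can augment sequence-pair training.
    img_emb, vid_emb = embed(pseudo_seq), embed(video_feats)
    print(img_emb.shape, vid_emb.shape)    # torch.Size([4, 512]) twice

In this sketch, the pseudo sequence lets still-image pairs be fed through the same sequence embedder as real video sequences, which is the mechanism the abstract describes for transferring knowledge from image data to sequence data.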
Year
2017
DOI
10.1145/3123266.3123324
Venue
MM '17: ACM Multimedia Conference, Mountain View, California, USA, October 2017
Keywords
person Re-ID, cross-modality alignment network, knowledge transferring
Field
Computer vision, Computer science, Data sequences, Artificial intelligence, Cross modality, Machine learning
DocType
Conference
ISBN
978-1-4503-4906-2
Citations
3
PageRank
0.37
References
20
Authors
6
Name           Order  Citations  PageRank
Zhichao Song   1      45         1.40
Bingbing Ni    2      1421       82.90
Yichao Yan     3      90         6.70
Zhe Ren        4      27         2.09
Yi Xu          5      6          3.73
Xiaokang Yang  6      3581       238.09