Abstract |
---|
Multi-shot person Re-IDentification (Re-ID) has recently received more research attention, as its problem setting is more realistic than single-shot Re-ID in terms of application. While many large-scale single-shot Re-ID human image datasets have been released, most existing multi-shot Re-ID video sequence datasets contain only a few (i.e., several hundred) human instances, which hinders further improvement of multi-shot Re-ID performance. To this end, we propose a deep cross-modality alignment network, which jointly explores both human sequence pairs and image pairs to facilitate training better multi-shot human Re-ID models, i.e., by transferring knowledge from image data to sequence data. To mitigate the modality-to-modality mismatch issue, the proposed network is equipped with an image-to-sequence adaptation module, called the cross-modality alignment sub-network, which maps each human image into a pseudo human sequence to facilitate knowledge transfer and joint training. Extensive experimental results on several multi-shot person Re-ID benchmarks demonstrate the significant performance gain brought by the proposed network. |
Year | DOI | Venue |
---|---|---|
2017 | 10.1145/3123266.3123324 | MM '17: ACM Multimedia Conference, Mountain View, California, USA, October 2017 |
Keywords | Field | DocType |
---|---|---|
person Re-ID, cross-modality alignment network, knowledge transferring | Computer vision, Computer science, Data sequences, Artificial intelligence, Cross modality, Machine learning | Conference |

ISBN | Citations | PageRank |
---|---|---|
978-1-4503-4906-2 | 3 | 0.37 |

References | Authors |
---|---|
20 | 6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Zhichao Song | 1 | 45 | 1.40 |
Bingbing Ni | 2 | 1421 | 82.90 |
Yichao Yan | 3 | 90 | 6.70 |
Zhe Ren | 4 | 27 | 2.09 |
Yi Xu | 5 | 6 | 3.73 |
Xiaokang Yang | 6 | 3581 | 238.09 |