Abstract |
---|
We address the person re-identification problem by exploiting a globally discriminative feature representation learned from a sequence of tracked human regions/patches. This is in contrast to previous person re-id works, which rely on either single-frame-based person-to-person patch matching or graph-based sequence-to-sequence matching. We show that a progressive/sequential fusion framework based on a long short-term memory (LSTM) network aggregates the frame-wise human region representations at each time step and yields a sequence-level human feature representation. Since LSTM nodes can remember and propagate previously accumulated good features and forget newly input inferior ones, the proposed recurrent feature aggregation network (RFA-Net) generates highly discriminative sequence-level human representations even from simple hand-crafted features. Extensive experimental results on two person re-identification benchmarks demonstrate that the proposed method performs favorably against state-of-the-art person re-identification methods. |
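The abstract describes folding per-frame features into one sequence-level vector with an LSTM. The record does not include the authors' implementation, so the following is only a minimal illustrative sketch of that aggregation idea with a single hand-rolled LSTM cell in NumPy; all names (`lstm_aggregate`, the weight shapes) are hypothetical, and the final hidden state stands in for the sequence-level representation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_aggregate(frames, Wx, Wh, b):
    """Fold frame-wise feature vectors into one sequence-level vector.

    frames: (T, d_in) array, one feature vector per tracked frame.
    Wx: (4*d_h, d_in), Wh: (4*d_h, d_h), b: (4*d_h,) stacked gate
    parameters in the order [input, forget, output, candidate].
    Returns the final hidden state (d_h,) as the sequence representation.
    """
    d_h = Wh.shape[1]
    h = np.zeros(d_h)  # hidden state
    c = np.zeros(d_h)  # cell state: accumulates "good" features over time
    for x in frames:
        z = Wx @ x + Wh @ h + b          # all four gates in one product
        i = sigmoid(z[:d_h])             # input gate: admit new features
        f = sigmoid(z[d_h:2 * d_h])      # forget gate: drop inferior ones
        o = sigmoid(z[2 * d_h:3 * d_h])  # output gate
        g = np.tanh(z[3 * d_h:])         # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

# Toy usage with random parameters (an untrained cell, for shape-checking only):
rng = np.random.default_rng(0)
T, d_in, d_h = 10, 64, 32
frames = rng.standard_normal((T, d_in))
Wx = 0.1 * rng.standard_normal((4 * d_h, d_in))
Wh = 0.1 * rng.standard_normal((4 * d_h, d_h))
b = np.zeros(4 * d_h)
feat = lstm_aggregate(frames, Wx, Wh, b)
print(feat.shape)  # (32,): one fixed-length vector for the whole sequence
```

The key property the abstract relies on is visible in the recurrence: the forget gate `f` scales the accumulated cell state while the input gate `i` admits the current frame, so the final state can retain earlier discriminative features and suppress later noisy ones.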
Year | DOI | Venue |
---|---|---|
2017 | 10.1007/978-3-319-46466-4_42 | Computer Vision - ECCV 2016, Part VI |
Keywords | DocType | Volume |
---|---|---|
Person re-identification, Feature fusion, Long short term memory networks | Journal | 9910 |
ISSN | Citations | PageRank |
---|---|---|
0302-9743 | 42 | 1.03 |
References | Authors |
---|---|
24 | 6 |
Name | Order | Citations | PageRank |
---|---|---|---|
Yichao Yan | 1 | 90 | 6.70 |
Bingbing Ni | 2 | 1421 | 82.90 |
Zhichao Song | 3 | 45 | 1.40 |
Chao Ma | 4 | 637 | 25.28 |
Yan Yan | 5 | 784 | 38.14 |
Xiaokang Yang | 6 | 3581 | 238.09 |