Abstract
---
Attention mechanisms have achieved success in video-based person re-identification (re-ID). However, current global attention mechanisms tend to focus on the most salient parts, e.g., clothes, and ignore other subtle but valuable cues, e.g., hair, bags, and shoes; they therefore do not make full use of information from diverse parts of the human body. To tackle this issue, we propose a Diverse Part Attentive Network (DPAN) to exploit discriminative and diverse body cues. The framework consists of two modules: spatial diverse part attention and temporal diverse part attention. The spatial module uses channel grouping to attend to diverse body parts, both salient and subtle. The temporal module learns diverse weights for fusing the learned features across frames. Moreover, the framework is lightweight, introducing only marginal additional parameters and computational cost. Extensive experiments on three popular benchmarks, i.e., iLIDS-VID, PRID2011, and MARS, show that our method achieves competitive performance compared with state-of-the-art methods. (c) 2021 Elsevier B.V. All rights reserved.
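The two modules described in the abstract, channel-grouped spatial attention over body parts followed by per-group temporal weighting, can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea only; the function name `dpan_sketch`, the mean-pooled attention scores, and the group-wise softmax fusion are illustrative assumptions, not the paper's actual layers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dpan_sketch(feats, groups=4):
    """Toy sketch: feats has shape (T, C, H, W), i.e. T frame-level
    feature maps with C channels; C must be divisible by `groups`."""
    T, C, H, W = feats.shape
    # Channel grouping: each group is meant to cover a different body part.
    g = feats.reshape(T, groups, C // groups, H * W)
    # Spatial diverse part attention: one spatial weighting per group
    # (scores here are simply the group-mean activation, an assumption).
    s_attn = softmax(g.mean(axis=2), axis=-1)            # (T, groups, H*W)
    part = (g * s_attn[:, :, None, :]).sum(axis=-1)      # (T, groups, C//groups)
    # Temporal diverse part attention: weight frames separately per group.
    t_attn = softmax(part.mean(axis=2), axis=0)          # (T, groups)
    video = (part * t_attn[:, :, None]).sum(axis=0)      # (groups, C//groups)
    return video.reshape(-1)                             # (C,) video-level feature
```

Because each channel group gets its own spatial map and its own temporal weights, salient and subtle parts can contribute independently to the final video-level descriptor, which is the diversity property the abstract emphasizes.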
Year | DOI | Venue |
---|---|---|
2021 | 10.1016/j.patrec.2021.05.020 | PATTERN RECOGNITION LETTERS |
Keywords | DocType | Volume
---|---|---
Person re-identification, Person retrieval, Self-attention | Journal | 149

ISSN | Citations | PageRank
---|---|---
0167-8655 | 0 | 0.34

References | Authors
---|---
0 | 9
Name | Order | Citations | PageRank |
---|---|---|---|
Xiujun Shu | 1 | 2 | 2.42 |
Ge Li | 2 | 147 | 13.87 |
Longhui Wei | 3 | 0 | 0.34 |
Jiaxing Zhong | 4 | 0 | 1.35
Xianghao Zang | 5 | 0 | 0.68 |
Shiliang Zhang | 6 | 1213 | 66.09 |
Yaowei Wang | 7 | 134 | 29.62 |
Yongsheng Liang | 8 | 13 | 4.00 |
Qi Tian | 9 | 6443 | 331.75 |