Title
Knowledge self-distillation for visible-infrared cross-modality person re-identification
Abstract
Visible-Infrared cross-modality person Re-IDentification (VI-ReID) is a challenging task due to the large modality discrepancy and intra-modality variations. Nevertheless, it continues to attract growing interest owing to its significant role in public security. In this paper, we propose a novel VI-ReID method based on Knowledge Self-Distillation (KSD), which aims to improve the discrimination ability of a common neural network through better feature exploration. KSD is achieved by first constructing shallow recognizers with the same structure as the deepest recognizer in the same convolutional neural network, and then using the deepest one to teach the shallower ones under multi-dimensional supervision. Subsequently, the lower-level features extracted from shallower layers that have absorbed deep knowledge further boost higher-level feature learning in turn. During training, multi-dimensional loss functions are integrated as the mentor for more effective learning supervision. Finally, a VI-ReID model with better feature representation capability is produced via abundant knowledge transfer and feedback. Extensive experiments on two public databases demonstrate the significant superiority of the proposed method in terms of identification accuracy. Furthermore, our method also proves effective for model lightweighting while preserving performance, which indicates its strong application potential on resource-limited edge devices.
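The self-distillation scheme described in the abstract can be sketched in PyTorch: auxiliary recognizers with the same structure as the deepest one are attached to shallower stages, every recognizer is trained with a classification loss, and the deepest recognizer additionally teaches the shallower ones via a softened distillation loss. This is a minimal illustrative sketch, not the paper's implementation; the toy backbone, layer sizes, temperature, and loss weights are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfDistillNet(nn.Module):
    """Toy three-stage CNN with a recognizer (pool + linear) at each depth.
    A real VI-ReID model would use ResNet stages; this is illustrative only."""
    def __init__(self, num_ids=10):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        pool = nn.AdaptiveAvgPool2d(1)
        # Shallow recognizers share the structure of the deepest recognizer.
        self.head1 = nn.Sequential(pool, nn.Flatten(), nn.Linear(16, num_ids))
        self.head2 = nn.Sequential(pool, nn.Flatten(), nn.Linear(32, num_ids))
        self.head3 = nn.Sequential(pool, nn.Flatten(), nn.Linear(64, num_ids))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Logits ordered shallow -> deep.
        return self.head1(f1), self.head2(f2), self.head3(f3)

def ksd_loss(logits, labels, temp=4.0, alpha=0.5):
    """Cross-entropy on every recognizer, plus KL divergence from the
    (detached) deepest recognizer to each shallower one.
    temp and alpha are assumed hyperparameters, not the paper's values."""
    *shallow, deepest = logits
    loss = F.cross_entropy(deepest, labels)
    soft_teacher = F.softmax(deepest.detach() / temp, dim=1)
    for z in shallow:
        loss = loss + (1 - alpha) * F.cross_entropy(z, labels)
        loss = loss + alpha * temp ** 2 * F.kl_div(
            F.log_softmax(z / temp, dim=1), soft_teacher, reduction="batchmean")
    return loss
```

A single backward pass through `ksd_loss` then updates the backbone and all recognizers jointly, so the shallow layers absorb the deepest recognizer's knowledge while still serving the final prediction.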
Year
2022
DOI
10.1007/s10489-021-02814-4
Venue
Applied Intelligence
Keywords
Visible-Infrared person Re-IDentification (VI-ReID), Cross-modality, Knowledge self-distillation (KSD)
DocType
Journal
Volume
52
Issue
9
ISSN
0924-669X
Citations
0
PageRank
0.34
References
10
Authors
5
Name          Order  Citations  PageRank
Yu Zhou       1      0          1.01
Rui Li        2      0          0.34
Yanjing Sun   3      10         8.90
Kaiwen Dong   4      0          0.34
Song Li       5      18         11.41