Title
Conditional Feature Embedding by Visual Clue Correspondence Graph for Person Re-Identification
Abstract
Although person re-identification (ReID) has made impressive progress, difficult cases such as occlusion, viewpoint change, and similar clothing still pose great challenges. Extracting discriminative feature representations is crucial to tackling these challenges. Most existing methods extract ReID features from individual images separately. However, when matching two images, we propose that the ReID features of a query image should be dynamically adjusted based on contextual information from the gallery image it is matched against. We call this type of ReID feature a conditional feature embedding. In this paper, we propose a novel ReID framework, Clue Alignment based Conditional Embedding (CACE-Net), which extracts conditional feature embeddings based on the aligned visual clues between image pairs. CACE-Net applies an attention module to build a detailed correspondence graph between crucial visual clues in image pairs and uses a discrepancy-based GCN to embed the obtained complex correspondence information into the conditional features. Experiments show that CACE-Net achieves state-of-the-art performance on three public datasets.
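The pipeline the abstract describes (an attention module that builds a clue correspondence graph between an image pair, followed by a graph-convolution step that propagates discrepancy information into conditional features) can be caricatured as follows. This is a hypothetical numpy sketch with invented shapes and a parameter-free graph step, not CACE-Net's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable row-wise softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conditional_embedding(query_parts, gallery_parts):
    """Adjust a query image's local 'clue' features using the gallery
    image it is matched against (all shapes here are illustrative).

    query_parts, gallery_parts: (N, D) arrays of N local clue features.
    Returns (N, D) L2-normalized conditional features.
    """
    d = query_parts.shape[1]
    # Attention-style affinity between every query/gallery clue pair:
    # this plays the role of the correspondence graph.
    corr = softmax(query_parts @ gallery_parts.T / np.sqrt(d), axis=1)

    # Softly matched gallery counterpart of each query clue.
    matched = corr @ gallery_parts

    # Discrepancy features: how each clue differs from its match.
    disc = query_parts - matched

    # One parameter-free graph-convolution step that propagates the
    # discrepancies over clue-to-clue similarities within the query.
    adj = softmax(query_parts @ query_parts.T, axis=1)
    conditional = query_parts + adj @ disc

    return conditional / np.linalg.norm(conditional, axis=1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.normal(size=(6, 128))  # 6 clues per image, 128-dim (invented sizes)
g = rng.normal(size=(6, 128))
emb = conditional_embedding(q, g)
print(emb.shape)  # (6, 128)
```

Note that the output depends on both images: the same query produces a different embedding against a different gallery image, which is the "conditional" property the paper argues for.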
Year: 2022
DOI: 10.1109/TIP.2022.3206617
Venue: IEEE TRANSACTIONS ON IMAGE PROCESSING
Keywords: Feature extraction, Visualization, Transformers, Semantics, Fuses, Convolution, Sun, Person re-identification, dynamically adjust, conditional feature, clue alignment, discrepancy-based GCN
DocType: Journal
Volume: 31
Issue: 1
ISSN: 1057-7149
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name            Order  Citations  PageRank
Fufu Yu         1      0          1.69
Xinyang Jiang   2      52         5.85
Yifei Gong      3      1          3.05
Wei-Shi Zheng   4      2915       140.63
Feng Zheng      5      369        31.93
Sun Xing        6      33         10.94