Title
Disentangled Feature Learning Network and a Comprehensive Benchmark for Vehicle Re-Identification
Abstract
Vehicle Re-Identification (ReID) is of great significance for public security and intelligent transportation. Large and comprehensive datasets are crucial for the development of vehicle ReID, in both model training and evaluation. However, existing datasets in this field have limitations in many aspects, including constrained capture conditions, limited variation of vehicle appearances, and the small scale of the training and test sets. Hence, a new, large, and challenging benchmark for vehicle ReID is urgently needed. In this paper, we propose a large vehicle ReID dataset, called VERI-Wild 2.0, containing 825,042 images. It is captured using a city-scale surveillance camera system, consisting of 274 cameras covering a very large area over 200 <inline-formula><tex-math notation="LaTeX">$km^2$</tex-math></inline-formula> . Specifically, the samples in our dataset present very rich appearance diversity thanks to the long-time-span collection settings, unconstrained capturing viewpoints, various illumination conditions, diversified background environments, and different weather conditions. Furthermore, to facilitate more practical benchmarking, we define a challenging and large test set containing about 400K vehicle images that do not have any camera overlap with the training set. VERI-Wild 2.0 is expected to facilitate the design, adaptation, development, and evaluation of different types of learning models for vehicle ReID. In addition, we design a new method for vehicle ReID. We observe that orientation is a crucial factor for feature matching in vehicle ReID. To match vehicle pairs captured from similar orientations, the learned features are expected to capture specific, detailed differential information for discriminating visually similar yet different vehicles. In contrast, features are desired to capture orientation-invariant common information when matching samples captured from different orientations.
Thus, a novel disentangled feature learning network (DFNet) is proposed. It explicitly considers the orientation information for vehicle ReID, and concurrently learns orientation-specific and orientation-common features, which can then be adaptively exploited via an adaptive matching scheme when dealing with matching pairs from similar or different orientations. Comprehensive experimental results show the effectiveness of our proposed method.
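The adaptive matching idea described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration (the function name `adaptive_match`, the use of discretized orientation bins, and cosine similarity as the metric are not specified by the abstract): when two images come from similar orientations, the score relies on the orientation-specific features; otherwise it falls back to the orientation-common features.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def adaptive_match(spec_a, common_a, orient_a,
                   spec_b, common_b, orient_b):
    """Hypothetical orientation-adaptive matching score.

    spec_*   : orientation-specific feature vectors
    common_* : orientation-common feature vectors
    orient_* : discretized orientation bins (an assumption; the paper
               may estimate orientation differently)
    """
    if orient_a == orient_b:
        # Similar orientations: fine-grained, orientation-specific
        # details discriminate visually similar vehicles.
        return cosine(spec_a, spec_b)
    # Different orientations: compare orientation-invariant
    # common information instead.
    return cosine(common_a, common_b)
```

This is only a sketch of the matching logic; in the actual DFNet the two feature types are learned jointly by the network rather than supplied separately.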
Year
DOI
Venue
2022
10.1109/TPAMI.2021.3099253
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keywords
Vehicle re-identification, vehicle dataset, disentangled learning
DocType
Journal
Volume
44
Issue
10
ISSN
0162-8828
Citations
0
PageRank
0.34
References
7
Authors
5
Name          Order  Citations  PageRank
Yan Bai       1      10         2.55
Jun Liu       2      671        30.44
Lou Yihang    3      75         9.57
Ce Wang       4      23         9.20
Ling-yu Duan  5      1770       124.87