Title
Spatiotemporal Feature Extraction for Pedestrian Re-identification
Abstract
Video-based person re-identification (ReID) is a person-retrieval problem that aims to match the same person across different videos, and it has gradually entered the arena of public security. Such a system generally involves three important parts: feature extraction, feature aggregation, and the loss function. Pedestrian feature extraction and aggregation are critical steps in this field. Most previous studies concentrate on designing various feature extractors; however, these extractors cannot effectively extract spatiotemporal information. In this paper, several spatiotemporal convolution blocks are proposed to optimize the feature extraction model for person re-identification. First, 2D convolution and 3D convolution are used simultaneously on the video volume to extract spatiotemporal features. Second, a non-local block is embedded into ResNet3D-50 to capture long-range dependencies. As a result, the proposed model can learn the inner links of pedestrian actions in a video. Experimental results on the MARS dataset show that our model achieves significant improvements over state-of-the-art methods.
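The abstract names two architectural ideas: a mixed 2D/3D convolution applied to the video volume and a non-local block embedded into ResNet3D-50. The following is a minimal PyTorch-style sketch of both ideas, not the authors' implementation; the class names (MixedConvBlock, NonLocalBlock3D), kernel sizes, the summation fusion, and the embedded-Gaussian non-local formulation are illustrative assumptions.

```python
# Minimal sketch of mixed 2D/3D convolution and a non-local block for video
# features. Assumptions (not from the paper): layer sizes, fusion by summation,
# and the embedded-Gaussian non-local variant (Wang et al., 2018).
import torch
import torch.nn as nn


class MixedConvBlock(nn.Module):
    """Applies a per-frame 2D convolution and a 3D spatiotemporal convolution
    to the same video volume, then fuses the two feature maps by summation."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Per-frame 2D spatial convolution realized as a 1 x 3 x 3 kernel.
        self.conv2d = nn.Conv3d(in_channels, out_channels,
                                kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # 3D spatiotemporal convolution with a 3 x 3 x 3 kernel.
        self.conv3d = nn.Conv3d(in_channels, out_channels,
                                kernel_size=3, padding=1)
        self.bn = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                      # x: (N, C, T, H, W)
        return self.relu(self.bn(self.conv2d(x) + self.conv3d(x)))


class NonLocalBlock3D(nn.Module):
    """Embedded-Gaussian non-local block over a (T, H, W) feature volume,
    capturing long-range dependencies across frames and positions."""

    def __init__(self, channels):
        super().__init__()
        self.inter = channels // 2
        self.theta = nn.Conv3d(channels, self.inter, kernel_size=1)
        self.phi = nn.Conv3d(channels, self.inter, kernel_size=1)
        self.g = nn.Conv3d(channels, self.inter, kernel_size=1)
        self.out = nn.Conv3d(self.inter, channels, kernel_size=1)

    def forward(self, x):                      # x: (N, C, T, H, W)
        n = x.shape[0]
        theta = self.theta(x).flatten(2).transpose(1, 2)   # (N, THW, C')
        phi = self.phi(x).flatten(2)                        # (N, C', THW)
        g = self.g(x).flatten(2).transpose(1, 2)            # (N, THW, C')
        attn = torch.softmax(theta @ phi, dim=-1)           # pairwise affinities
        y = (attn @ g).transpose(1, 2).reshape(n, self.inter, *x.shape[2:])
        return x + self.out(y)                 # residual keeps the backbone intact


if __name__ == "__main__":
    clip = torch.randn(2, 16, 4, 14, 14)       # (batch, channels, frames, H, W)
    feat = MixedConvBlock(16, 32)(clip)
    feat = NonLocalBlock3D(32)(feat)
    print(feat.shape)                          # torch.Size([2, 32, 4, 14, 14])
```

In practice such a non-local block would be inserted between residual stages of a ResNet3D-50 backbone; the residual connection means it can be added without disturbing the pretrained weights.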
Year
2019
DOI
10.1007/978-3-030-23597-0_15
Venue
WIRELESS ALGORITHMS, SYSTEMS, AND APPLICATIONS, WASA 2019
Keywords
ReID, Spatiotemporal feature, Mixed convolution, Non-local block
Field
Mars Exploration Program, Pedestrian, Pattern recognition, Convolution, Computer science, Feature extraction, Artificial intelligence, Feature aggregation, Public security, Distributed computing
DocType
Conference
Volume
11604
ISSN
0302-9743
Citations
0
PageRank
0.34
References
0
Authors
5
Name            Order  Citations  PageRank
Ye Li           1      61         7.26
Guangqiang Yin  2      2          5.79
Shaoqi Hou      3      0          1.35
Jianhai Cui     4      0          0.34
Zicheng Huang   5      0          0.34