Title
A parallel vision approach to scene-specific pedestrian detection
Abstract
In recent years, with the development of computing power and deep learning algorithms, pedestrian detection has made great progress. Nevertheless, once a detection model trained on generic datasets (such as PASCAL VOC and MS COCO) is applied to a specific scene, its precision is limited by the distribution gap between the generic data and the scene-specific data. Training the model for a specific scene is difficult because labeled data from that scene are scarce. Even when some labeled data can be obtained from a specific scene, changing environmental conditions cause the pre-trained model to perform poorly. In light of these issues, we propose a parallel vision approach to scene-specific pedestrian detection. Given an object detection model, it is trained in two sequential stages: (1) the model is pre-trained on augmented-reality data to address the lack of scene-specific training data; (2) the pre-trained model is incrementally optimized with newly synthesized data as the specific scene evolves over time. On publicly available datasets, our approach achieves higher precision than models trained on generic data. To assess robustness to dynamically changing scenes, we further evaluate our approach on webcam data collected from Church Street Market Place, and the results are also encouraging.
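The two-stage schedule described in the abstract (pre-train on synthesized data, then incrementally update as the scene drifts) can be sketched with a toy stand-in. Everything below is illustrative, not the authors' implementation: a 1-D linear model trained by gradient descent plays the role of the detector, and parameterized random streams play the role of the augmented-reality and evolving scene data.

```python
import random

def make_batch(slope, rng, n=64, noise=0.1):
    """One batch of (x, y) pairs from y = slope*x + Gaussian noise.
    Stands in for one batch of (synthetic or scene) training data."""
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [slope * x + rng.gauss(0.0, noise) for x in xs]
    return xs, ys

def sgd_step(w, xs, ys, lr=0.1):
    """One gradient-descent step on mean squared error for weight w."""
    grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    return w - lr * grad

rng = random.Random(0)

# Stage 1: pre-train on "augmented-reality" data approximating the
# target scene (true slope 2.0 here).
w = 0.0
for _ in range(300):
    xs, ys = make_batch(2.0, rng)
    w = sgd_step(w, xs, ys)

# Stage 2: the scene drifts over time (slope moves to 2.5); instead of
# retraining from scratch, incrementally update the pre-trained model
# on newly synthesized batches.
for _ in range(300):
    xs, ys = make_batch(2.5, rng)
    w = sgd_step(w, xs, ys)

print(f"adapted weight: {w:.2f}")  # close to the drifted scene's slope of 2.5
```

The point of the sketch is only the schedule: stage 2 starts from the stage-1 weights, so the model tracks the drifted data distribution with a modest amount of new data rather than a full retrain.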
Year
2020
DOI
10.1016/j.neucom.2019.03.095
Venue
Neurocomputing
Keywords
Pedestrian detection, Specific scene, Synthetic data, Video surveillance, Parallel vision
DocType
Journal
Volume
394
ISSN
0925-2312
Citations
6
PageRank
0.42
References
0
Authors
5
Name            Order   Citations   PageRank
Wenwen Zhang    1       8           1.12
Kunfeng Wang    2       434         26.42
Yating Liu      3       15          1.88
Yue Lu          4       6           0.76
Fei-Yue Wang    5       161         21.26