Title: TSingNet: Scale-aware and context-rich feature learning for traffic sign detection and recognition in the wild
Abstract: Traffic sign detection and recognition in the wild is a challenging task. Existing techniques often fail to detect small or occluded traffic signs because of scale variation and context loss, which cause semantic gaps between multiple scales. We propose a new traffic sign detection network (TSingNet), which learns scale-aware and context-rich features to effectively detect and recognize small and occluded traffic signs in the wild. Specifically, TSingNet first constructs an attention-driven bilateral feature pyramid network, which draws on both bottom-up and top-down subnets to dually circulate low-, mid-, and high-level foreground semantics in scale self-attention learning. This learns scale-aware foreground features and thus narrows the semantic gaps between multiple scales. An adaptive receptive field fusion block with variable dilation rates is then introduced to exploit context-rich representations and suppress the influence of occlusion at each scale. TSingNet is end-to-end trainable by jointly minimizing the scale-aware loss and multi-branch fusion losses, which adds only a few parameters but significantly improves detection performance. In extensive experiments on three challenging traffic sign datasets (TT100K, STSD and DFG), TSingNet outperformed state-of-the-art methods for traffic sign detection and recognition in the wild.
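The adaptive receptive field fusion block described in the abstract relies on convolutions with variable dilation rates. The following minimal sketch (not the authors' implementation; the 1-D setting and function names are assumptions for illustration) shows the underlying mechanism: a dilated convolution with kernel size k and dilation d covers an effective extent of k + (k-1)(d-1) input positions, so larger dilation rates capture wider context with the same number of weights.

```python
# Illustrative sketch only: how variable dilation rates enlarge the
# effective receptive field of a fixed-size convolution kernel, the
# mechanism behind receptive-field fusion blocks. Pure Python, 1-D.

def effective_receptive_field(kernel_size: int, dilation: int) -> int:
    """Effective input extent covered by a dilated convolution kernel."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

def dilated_conv1d(signal, weights, dilation):
    """Minimal 1-D dilated convolution with 'valid' padding."""
    k = len(weights)
    span = effective_receptive_field(k, dilation)
    out = []
    for i in range(len(signal) - span + 1):
        # Taps are spaced `dilation` positions apart.
        out.append(sum(weights[j] * signal[i + j * dilation] for j in range(k)))
    return out

if __name__ == "__main__":
    # A 3-tap kernel spans 3, 5, and 7 input positions at dilations 1, 2, 3.
    for d in (1, 2, 3):
        print(d, effective_receptive_field(3, d))
    # Same averaging kernel, wider context as dilation grows.
    x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
    print(dilated_conv1d(x, [1 / 3, 1 / 3, 1 / 3], 2))
```

A fusion block in this spirit would run several such branches with different dilation rates in parallel over the same feature map and combine their outputs, letting the network adapt its receptive field per scale.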
Year: 2021
DOI: 10.1016/j.neucom.2021.03.049
Venue: Neurocomputing
Keywords: Traffic sign detection and recognition, Scale-aware and context-rich feature learning, Attention-driven bilateral feature pyramid network, Adaptive receptive field, Scale variation and occlusion
DocType: Journal
Volume: 447
ISSN: 0925-2312
Citations: 1
PageRank: 0.35
References: 0
Authors: 5
Authors (order, name, citations/PageRank):
1. Yuanyuan Liu, 122.22
2. Jiyao Peng, 10.35
3. Jing-Hao Xue, 1510.05
4. Yong-Quan Chen, 43.12
5. Zhang-Hua Fu, 276.49