Title
Detecting Dense Text In Natural Images
Abstract
Most existing text detection methods are motivated by deep-learning-based object detection approaches, which can produce serious overlaps between detected text lines, especially in dense text scenarios. This is because, unlike general objects in natural scenes, text boxes rarely overlap one another. Moreover, text detection requires higher localisation accuracy than generic object detection. To tackle these problems, the authors propose a novel dense text detection network (DTDN) that localises tighter text lines without overlapping. Their main novelties are: (i) an intersection-over-union overlap loss, which considers the correlation between an anchor and the ground-truth (GT) boxes and measures how much text area an anchor contains; (ii) a novel anchor sample selection strategy, named CMax-OMin, which selects tighter positive samples for training. The CMax-OMin strategy not only requires that an anchor has the largest overlap with its corresponding GT box (CMax), but also keeps its overlap with all other GT boxes as small as possible (OMin). In addition, the authors train a bounding-box regressor as a post-processing step to further improve text localisation. Experiments on scene text benchmark datasets and the authors' proposed dense text dataset demonstrate that DTDN achieves competitive performance, especially in dense text scenarios.
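The CMax-OMin idea described in the abstract can be illustrated with a minimal sketch. The helper names and threshold values below are assumptions for illustration only (the paper's actual formulation and thresholds are not given here): an anchor counts as a positive sample for a GT box only if its IoU with that box is the maximum over all GT boxes and sufficiently large (CMax), while its IoU with every other GT box stays small (OMin).

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def cmax_omin_positive(anchor, gt_boxes, gt_idx,
                       cmax_thresh=0.5, omin_thresh=0.3):
    """Sketch of CMax-OMin sample selection (thresholds are illustrative
    guesses, not the paper's values). The anchor is positive for
    gt_boxes[gt_idx] only if:
      CMax: its IoU with that GT box is the largest among all GT boxes
            and exceeds cmax_thresh;
      OMin: its IoU with every other GT box stays below omin_thresh.
    """
    overlaps = [iou(anchor, g) for g in gt_boxes]
    if overlaps[gt_idx] < cmax_thresh:
        return False
    if max(range(len(overlaps)), key=overlaps.__getitem__) != gt_idx:
        return False
    others = [o for i, o in enumerate(overlaps) if i != gt_idx]
    return not others or max(others) < omin_thresh
```

In dense text, an anchor often straddles two adjacent text lines; the OMin condition rejects such anchors even when their IoU with one GT box is high, which is what pushes the detector toward tighter, non-overlapping boxes.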
Year: 2020
DOI: 10.1049/iet-cvi.2019.0916
Venue: IET COMPUTER VISION
DocType: Journal
Volume: 14
Issue: 8
ISSN: 1751-9632
Citations: 0
PageRank: 0.34
References: 0
Authors: 7
Name             | Order | Citations | PageRank
Shengsheng Zhang | 1     | 0         | 0.34
Dianzhuan Jiang  | 2     | 0         | 0.34
Yaping Huang     | 3     | 108       | 21.45
Qi Zou           | 4     | 43        | 13.59
Xingyuan Zhang   | 5     | 3         | 2.41
Mengyang Pu      | 6     | 6         | 1.44
Junbo Liu        | 7     | 2         | 1.05