Title
Two-Layer Residual Feature Fusion for Object Detection
Abstract
Recently, many single-stage detectors using multi-scale features have been proposed. They are much faster than two-stage detectors that rely on region proposal networks (RPN), with little degradation in detection performance. However, in a single-stage detector, the feature maps in the lower layers close to the input, which are responsible for detecting small objects, suffer from insufficient representation power because they are too shallow. There is also a structural contradiction: these feature maps must not only deliver low-level information to subsequent layers but also contain high-level abstractions for prediction. In this paper, we propose a method to enrich the representation power of feature maps with a new feature fusion scheme that exploits information from the consecutive layer. It also adopts a unified prediction module with improved generalization performance. The proposed method enables more precise prediction and achieves scores higher than or comparable to those of competitors such as SSD and DSSD on PASCAL VOC and MS COCO. In addition, it retains the fast computation of a single-stage detector, requiring much less computation than other detectors with similar performance.
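The abstract describes the fusion only at a high level. The following is a minimal, hypothetical PyTorch sketch of the general idea it outlines, namely fusing a lower-level feature map with its consecutive, deeper feature map in a residual manner before prediction. The class name, channel counts, and the specific choice of a 1x1 lateral convolution, bilinear upsampling, and element-wise addition are illustrative assumptions, not the paper's exact architecture.

# Minimal sketch (not the paper's exact module): fuse a lower-level feature map
# with the consecutive, deeper feature map via a residual-style combination.
# All layer names, channel counts, and spatial sizes below are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerResidualFusion(nn.Module):
    def __init__(self, low_channels, high_channels):
        super().__init__()
        # Project the deeper (higher-level) map to the lower map's channel count.
        self.lateral = nn.Conv2d(high_channels, low_channels, kernel_size=1)
        # Smooth the fused map before it is passed to a prediction module.
        self.smooth = nn.Conv2d(low_channels, low_channels, kernel_size=3, padding=1)

    def forward(self, low_feat, high_feat):
        # Upsample the deeper map to the spatial size of the lower map.
        high_up = F.interpolate(self.lateral(high_feat),
                                size=low_feat.shape[-2:],
                                mode='bilinear', align_corners=False)
        # Residual fusion: the lower map keeps its low-level detail while
        # gaining high-level context from the consecutive layer.
        fused = low_feat + high_up
        return F.relu(self.smooth(fused))

# Example usage with dummy SSD-like feature maps (shapes are illustrative).
low = torch.randn(1, 512, 38, 38)    # e.g., conv4_3-scale features
high = torch.randn(1, 1024, 19, 19)  # e.g., the consecutive, deeper features
fusion = TwoLayerResidualFusion(512, 1024)
out = fusion(low, high)              # -> torch.Size([1, 512, 38, 38])

In this sketch, the residual-style addition lets the lower-level map keep its spatial detail while receiving high-level context from the consecutive layer, which is the property the abstract attributes to its fusion method.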
Year
2019
DOI
10.5220/0007306803520359
Venue
ICPRAM: PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION APPLICATIONS AND METHODS
Keywords
Object Detection, Computer Vision, Machine Learning, Neural Network, Deep Learning
Field
Object detection, Residual, Feature fusion, Pattern recognition, Computer science, Artificial intelligence, Deep learning, Artificial neural network, Detector, Computation
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
4
Name           Order  Citations  PageRank
Jaeseok Choi   1      5          1.07
Kyoungmin Lee  2      0          0.34
Jisoo Jeong    3      5          2.08
Nojun Kwak     4      862        63.79