Title
A loss-balanced multi-task model for simultaneous detection and segmentation
Abstract
Object detection and semantic segmentation are two of the most important tasks in scene understanding, with applications in areas such as autonomous driving and intelligent surveillance. Although much progress has been made on each task, the two are usually investigated independently. In practice, scene understanding is complex and comprises many sub-tasks, so learning multiple tasks simultaneously with a single model is a natural direction. Because the goals of the two tasks are closely interrelated, there is strong motivation to improve object detection accuracy with the help of semantic segmentation, and vice versa. In this paper, we propose a loss-balanced multi-task model for simultaneous object detection and semantic segmentation. We explore deep multi-task learning with shared parameters and propose a single-stage architecture that jointly performs object detection and semantic segmentation so that the two tasks boost each other. With no additional computation at inference compared with the SSD and FCN baselines, we show that object detection and semantic segmentation benefit from each other. Experimental results on Pascal VOC and COCO show that our method substantially improves both object detection and semantic segmentation over the corresponding single-task baselines.
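The abstract describes the approach only at a high level. As an illustration of the general idea (not the paper's actual architecture or weighting scheme), the minimal PyTorch sketch below assumes a shared convolutional backbone feeding an SSD-style detection head and an FCN-style segmentation head, with the two task losses combined by a simple convex weight; all class names, layer sizes, and the balancing formula here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDetSegModel(nn.Module):
    """Illustrative shared-backbone model with a detection head and a
    segmentation head. The paper builds on SSD and FCN; here both heads
    are reduced to minimal placeholders."""

    def __init__(self, num_classes=21, num_anchors=6):
        super().__init__()
        # Shared convolutional backbone (stand-in for a full SSD/FCN trunk).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # SSD-style detection head: per-anchor class scores and box offsets.
        self.det_cls = nn.Conv2d(128, num_anchors * num_classes, 3, padding=1)
        self.det_reg = nn.Conv2d(128, num_anchors * 4, 3, padding=1)
        # FCN-style segmentation head: per-pixel class scores, upsampled.
        self.seg_head = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)
        det_cls = self.det_cls(feats)
        det_reg = self.det_reg(feats)
        seg_logits = F.interpolate(self.seg_head(feats), size=x.shape[-2:],
                                   mode="bilinear", align_corners=False)
        return det_cls, det_reg, seg_logits


def balanced_multitask_loss(det_loss, seg_loss, alpha=0.5):
    """Hypothetical loss balancing: a fixed convex combination of the two
    task losses. The paper's actual weighting scheme may differ."""
    return alpha * det_loss + (1.0 - alpha) * seg_loss


# Example usage for one training batch (loss functions are hypothetical):
# det_cls, det_reg, seg_logits = model(images)
# det_loss = ssd_multibox_loss(det_cls, det_reg, box_targets)
# seg_loss = F.cross_entropy(seg_logits, seg_labels)
# total_loss = balanced_multitask_loss(det_loss, seg_loss, alpha=0.5)
```

In such a setup, the shared backbone receives gradients from both losses, which is the mechanism by which one task can regularize and improve the other; the fixed weight used above is only a placeholder for the paper's loss-balancing strategy.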
Year
2021
DOI
10.1016/j.neucom.2020.11.024
Venue
Neurocomputing
Keywords
Object detection, Semantic segmentation, Multi-task learning, Scene understanding
DocType
Journal
Volume
428
ISSN
0925-2312
Citations
1
PageRank
0.35
References
0
Authors
5
Name            Order   Citations   PageRank
Wenwen Zhang    1       8           1.12
Kunfeng Wang    2       26          3.58
Yutong Wang     3       11          2.19
Lan Yan         4       12          7.91
Fei-Yue Wang    5       161         21.26