Title
PAD-Net: Multi-tasks Guided Prediction-and-Distillation Network for Simultaneous Depth Estimation and Scene Parsing
Abstract
Depth estimation and scene parsing are two particularly important tasks in visual scene understanding. In this paper, we tackle the problem of simultaneous depth estimation and scene parsing in a joint CNN. The task can typically be treated as a deep multi-task learning problem [42]. Unlike previous methods that directly optimize multiple tasks given the input training data, this paper proposes a novel multi-task guided prediction-and-distillation network (PAD-Net), which first predicts a set of intermediate auxiliary tasks ranging from low level to high level; the predictions of these auxiliary tasks are then used as multi-modal input to the final tasks via the proposed multi-modal distillation modules. During joint learning, the intermediate tasks not only act as supervision for learning more robust deep representations but also provide rich multi-modal information for improving the final tasks. Extensive experiments on two challenging datasets (i.e., NYUD-v2 and Cityscapes) for both depth estimation and scene parsing demonstrate the effectiveness of the proposed approach.
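The abstract describes a two-stage design: shared backbone features first drive a set of intermediate auxiliary predictions, which multi-modal distillation modules then fuse into features for the two final outputs (depth and parsing). Below is a minimal PyTorch sketch of that idea; the class names (PADNetSketch, MultiModalDistillation), the particular set of auxiliary tasks, the channel sizes, and the simple additive fusion are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of the prediction-and-distillation idea from the abstract.
# All layer sizes, task choices, and names are illustrative assumptions.
import torch
import torch.nn as nn


class IntermediateHeads(nn.Module):
    """Predict a set of intermediate auxiliary tasks from shared backbone features."""

    def __init__(self, in_ch, aux_channels):
        super().__init__()
        # aux_channels: dict mapping auxiliary-task name -> number of output channels
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(in_ch, ch, kernel_size=3, padding=1)
            for name, ch in aux_channels.items()
        })

    def forward(self, feat):
        return {name: head(feat) for name, head in self.heads.items()}


class MultiModalDistillation(nn.Module):
    """Fuse the intermediate predictions into a feature map for one final task."""

    def __init__(self, aux_channels, out_ch):
        super().__init__()
        self.encoders = nn.ModuleDict({
            name: nn.Conv2d(ch, out_ch, kernel_size=3, padding=1)
            for name, ch in aux_channels.items()
        })

    def forward(self, aux_preds):
        # Simple additive fusion of re-encoded auxiliary predictions (an assumption;
        # the paper proposes dedicated distillation modules).
        fused = sum(enc(aux_preds[name]) for name, enc in self.encoders.items())
        return torch.relu(fused)


class PADNetSketch(nn.Module):
    def __init__(self, in_ch=256, aux_channels=None, num_classes=19):
        super().__init__()
        # Hypothetical auxiliary tasks; the paper's exact set may differ.
        aux_channels = aux_channels or {
            "depth": 1, "normals": 3, "semantics": num_classes, "contours": 1
        }
        self.aux_heads = IntermediateHeads(in_ch, aux_channels)
        self.distill_depth = MultiModalDistillation(aux_channels, 64)
        self.distill_parsing = MultiModalDistillation(aux_channels, 64)
        self.depth_out = nn.Conv2d(64, 1, kernel_size=1)
        self.parsing_out = nn.Conv2d(64, num_classes, kernel_size=1)

    def forward(self, backbone_feat):
        aux = self.aux_heads(backbone_feat)      # intermediate auxiliary predictions
        depth = self.depth_out(self.distill_depth(aux))
        parsing = self.parsing_out(self.distill_parsing(aux))
        return aux, depth, parsing


if __name__ == "__main__":
    feat = torch.randn(1, 256, 60, 80)           # features from a shared backbone
    aux, depth, parsing = PADNetSketch()(feat)
    print(depth.shape, parsing.shape)            # (1, 1, 60, 80) and (1, 19, 60, 80)

In this sketch the auxiliary predictions are supervised by their own losses during training and simultaneously serve as the multi-modal input to the two final heads, mirroring the dual role described in the abstract.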
Year
2018
DOI
10.1109/CVPR.2018.00077
Venue
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Keywords
PAD-net, simultaneous depth estimation, particularly important tasks, visual scene understanding, deep multitask learning problem, intermediate auxiliary tasks, multimodal input, multimodal distillation modules, intermediate tasks, scene parsing tasks, multimodal information, multitasks guided prediction-and-distillation network
DocType
Conference
Volume
abs/1805.04409
ISSN
1063-6919
ISBN
978-1-5386-6421-6
Citations
20
PageRank
0.62
References
40
Authors
4
Name            Order  Citations  PageRank
Dan Xu          1      342        16.39
Wanli Ouyang    2      2371       105.17
Xiaogang Wang   3      9647       386.70
Nicu Sebe       4      7013       403.03