Title
Translate-to-Recognize Networks for RGB-D Scene Recognition
Abstract
Cross-modal transfer is helpful for enhancing modality-specific discriminative power in scene recognition. To this end, this paper presents a unified framework that integrates the tasks of cross-modal translation and modality-specific recognition, termed the Translate-to-Recognize Network (TRecgNet). Specifically, both the translation and recognition tasks share the same encoder network, which allows us to explicitly regularize the training of the recognition task with the help of translation and thus improve its final generalization ability. For the translation task, we place a decoder module on top of the encoder network and optimize it with a new layer-wise semantic loss, while for the recognition task, we use a linear classifier based on the feature embedding from the encoder, whose training is guided by the standard cross-entropy loss. In addition, our TRecgNet allows us to exploit large amounts of unlabeled RGB-D data to train the translation task and thus improve the representation power of the encoder network. Empirically, we verify that this new semi-supervised setting further enhances the performance of the recognition network. We perform experiments on two RGB-D scene recognition benchmarks, NYU Depth v2 and SUN RGB-D, demonstrating that TRecgNet achieves superior performance to existing state-of-the-art methods, especially for recognition based solely on a single modality.
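The abstract describes a two-headed architecture: a shared encoder feeding both a translation decoder and a linear classifier, trained jointly with a translation loss and cross-entropy. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer sizes, the class count, the plain L1 stand-in for the paper's layer-wise semantic loss, and the lambda_t weight are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TRecgNetSketch(nn.Module):
    """Minimal sketch of the translate-to-recognize idea: one shared
    encoder, a decoder head for cross-modal translation (RGB -> depth
    or depth -> RGB), and a linear head for scene classification."""

    def __init__(self, num_classes=19):  # class count is a placeholder
        super().__init__()
        # Shared encoder (stand-in for a pretrained CNN trunk).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Translation head: decodes the shared embedding back into an
        # image in the other modality.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )
        # Recognition head: linear classifier on pooled encoder features.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        feat = self.encoder(x)
        translated = self.decoder(feat)                     # translation branch
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)  # B x 128
        logits = self.classifier(pooled)                    # recognition branch
        return translated, logits

def trecg_loss(translated, target_modality, logits, labels, lambda_t=1.0):
    """Joint objective: cross-entropy for recognition plus a translation
    term. Plain L1 stands in for the paper's layer-wise semantic loss;
    lambda_t is an assumed balancing weight. On unlabeled RGB-D pairs,
    only the translation term would be applied."""
    rec_loss = F.cross_entropy(logits, labels)
    trans_loss = F.l1_loss(translated, target_modality)
    return rec_loss + lambda_t * trans_loss

Because both heads share the encoder, gradients from the translation loss regularize the same features used by the classifier, which is the mechanism the abstract credits for the improved generalization.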
Year
2019
DOI
10.1109/CVPR.2019.01211
Venue
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Field
Computer vision, Pattern recognition, Computer science, RGB color model, Artificial intelligence
DocType
Journal
Volume
abs/1904.12254
ISSN
1063-6919
Citations
3
PageRank
0.36
References
0
Authors
5
Name          Order  Citations  PageRank
Dapeng Du     1      5          2.09
LiMin Wang    2      8164       8.41
Huiling Wang  3      3          0.70
Kai Zhao      4      3          0.36
Gang-Shan Wu  5      27         6.75