Title
UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation
Abstract
Aiming towards human-level generalization, there is a need to explore adaptable representation learning methods with greater transferability. Most existing approaches address task-transferability and cross-domain adaptation independently, resulting in limited generalization. In this paper, we propose UM-Adapt, a unified framework to effectively perform unsupervised domain adaptation for spatially-structured prediction tasks while maintaining a balanced performance across individual tasks in a multi-task setting. To realize this, we propose two novel regularization strategies: (a) contour-based content regularization (CCR) and (b) exploitation of inter-task coherency using a cross-task distillation module. Furthermore, avoiding a conventional ad-hoc domain discriminator, we re-utilize the cross-task distillation loss as the output of an energy function to adversarially minimize the input domain discrepancy. Through extensive experiments, we demonstrate superior generalizability of the learned representations simultaneously for multiple tasks under domain shifts from synthetic to natural environments. UM-Adapt yields state-of-the-art transfer learning results on ImageNet classification and comparable performance on the PASCAL VOC 2007 detection task, even with a smaller backbone network. Moreover, the resulting semi-supervised framework outperforms the current fully-supervised multi-task learning state-of-the-art on both the NYUD and Cityscapes datasets.
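The abstract's key idea, re-using the cross-task distillation loss as an energy function instead of a separate domain discriminator, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real transfer modules are learned CNNs, whereas here `transfer_fn` is an arbitrary callable and the discrepancy is a plain mean-squared error. All names (`cross_task_energy`, the linear toy transfer) are hypothetical.

```python
import numpy as np

def cross_task_energy(pred_a, pred_b, transfer_fn):
    """Energy = discrepancy between task-B output predicted directly
    and the one distilled from task-A output via a transfer function.
    Coherent cross-task predictions (in-domain) give low energy;
    incoherent ones (under domain shift) give high energy."""
    distilled_b = transfer_fn(pred_a)
    return float(np.mean((distilled_b - pred_b) ** 2))

# Toy example with flat arrays standing in for dense task outputs
# (e.g. depth and surface normals) and a linear "transfer" mapping.
rng = np.random.default_rng(0)
depth = rng.standard_normal(100)
transfer = lambda d: 2.0 * d + 1.0
normals_coherent = transfer(depth)                              # tasks agree
normals_shifted = normals_coherent + rng.standard_normal(100)   # domain shift

low = cross_task_energy(depth, normals_coherent, transfer)    # -> 0.0
high = cross_task_energy(depth, normals_shifted, transfer)    # > low
```

Under this view, adversarial adaptation needs no extra discriminator: the shared encoder is trained so that target-domain inputs also yield low cross-task energy, i.e. the same inter-task coherence observed on the labeled source domain.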
Year
2019
DOI
10.1109/ICCV.2019.00152
Venue
2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)
Field
Computer vision, Computer science, Task adaptation, Distillation, Artificial intelligence, Adversarial system
DocType
Conference
Volume
2019
Issue
1
ISSN
1550-5499
Citations
6
PageRank
0.43
References
3
Authors
3
Name | Order | Citations | PageRank
Jogendra Nath Kundu | 1 | 14 | 6.29
Nishank Lakkakula | 2 | 6 | 0.43
R. Venkatesh Babu | 3 | 10468 | 4.83