Title
Adaptively sharing multi-levels of distributed representations in multi-task learning
Abstract
In multi-task learning, performance is often sensitive to the relationships between tasks, so it is important to study how to exploit the complex relationships across different tasks. One line of research captures complex task relationships by increasing the model capacity, which in turn requires a large training dataset. However, in many real-world applications the amount of labeled data is limited. In this paper, we propose a lightweight, specially designed architecture that models task relationships for small or medium-sized datasets. The proposed framework learns a task-specific ensemble of sub-networks at different depths and can adapt the model architecture to the given data. The task-specific ensemble parameters are learned simultaneously with the weights of the network by optimizing a single loss function defined with respect to the end task. The hierarchical model structure shares both general and specific distributed representations to capture the inherent relationships between tasks. We validate our approach on various types of tasks, including a synthetic task, an article recommendation task, and a vision task. The results demonstrate the advantages of our model over several competitive baselines, especially when the tasks are less related.
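The core mechanism the abstract describes (a per-task softmax ensemble over shared representations taken at several depths, trained jointly with the network through a single loss) can be sketched as follows. This is a minimal illustration assuming PyTorch; the class name MultiLevelMTL, the fully connected layers, and all dimensions are assumptions made for illustration, not the authors' released code.

```python
# Minimal sketch (assumed PyTorch): each task learns softmax ensemble
# weights over shared representations at several depths; the ensemble
# logits and network weights are trained jointly with one loss.
# All names and sizes here are illustrative, not the paper's code.
import torch
import torch.nn as nn


class MultiLevelMTL(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_layers, num_tasks):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim if i == 0 else hidden_dim, hidden_dim),
                           nn.ReLU())
             for i in range(num_layers)]
        )
        # One learnable logit per (task, depth); softmax yields ensemble weights.
        self.ensemble_logits = nn.Parameter(torch.zeros(num_tasks, num_layers))
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x):
        # Collect the distributed representation produced at every depth.
        reps = []
        h = x
        for layer in self.layers:
            h = layer(h)
            reps.append(h)
        reps = torch.stack(reps, dim=1)  # (batch, num_layers, hidden_dim)
        weights = torch.softmax(self.ensemble_logits, dim=-1)
        outputs = []
        for t, head in enumerate(self.heads):
            # Task-specific weighted sum over depths, then the task head.
            mixed = (weights[t].view(1, -1, 1) * reps).sum(dim=1)
            outputs.append(head(mixed))
        return outputs


# Joint training: one loss updates both the ensemble logits and the network.
model = MultiLevelMTL(in_dim=16, hidden_dim=32, num_layers=3, num_tasks=2)
x = torch.randn(8, 16)
targets = [torch.randn(8, 1), torch.randn(8, 1)]
loss = sum(nn.functional.mse_loss(y_hat, y)
           for y_hat, y in zip(model(x), targets))
loss.backward()
```

Because the ensemble logits sit in the same computation graph as the layer weights, a single backward pass through the summed task losses updates both, which is one plausible way the architecture can adapt its effective depth per task.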
Year: 2022
DOI: 10.1016/j.ins.2022.01.035
Venue: Information Sciences
Keywords: Multi-task learning, Deep learning, Machine learning
DocType: Journal
Volume: 591
ISSN: 0020-0255
Citations: 0
PageRank: 0.34
References: 8
Authors: 8
Name | Order | Citations | PageRank
Tianxin Wang | 1 | 0 | 0.34
Fuzhen Zhuang | 2 | 827 | 75.28
Ying Sun | 3 | 0 | 0.34
Xiangliang Zhang | 4 | 0 | 0.34
Leyu Lin | 5 | 0 | 0.34
Feng Xia | 6 | 0 | 0.34
Lei He | 7 | 0 | 0.34
Qing He | 8 | 0 | 0.34