Title
Coarse-to-Fine: Hierarchical Multi-task Learning for Natural Language Understanding
Abstract
Generalized text representations are the foundation of many natural language understanding tasks. To fully utilize different corpora, a model inevitably needs to understand the relevance among them. However, many methods ignore this relevance and adopt a single-channel model (a coarse paradigm) directly for all tasks, which lacks sufficient rationality and interpretability. In addition, some existing works learn downstream tasks by stitching together skill blocks (a fine paradigm), which can produce irrational results due to redundancy and noise. In this work, we first analyze task correlation from three different perspectives, i.e., data property, manual design, and model-based relevance, based on which similar tasks are grouped together. Then, we propose a hierarchical framework with a coarse-to-fine paradigm, in which the bottom level is shared across all tasks, the mid-level is divided into different groups, and the top level is assigned to each individual task. This allows our model to learn basic language properties from all tasks, boost performance on relevant tasks, and reduce the negative impact of irrelevant tasks. Our experiments on 13 benchmark datasets across five natural language understanding tasks demonstrate the superiority of our method.
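The abstract's three-level design (a bottom level shared by every task, a mid-level shared within each task group, and a top level owned by a single task) can be sketched as follows. This is a minimal illustrative sketch only, assuming Transformer encoder layers and a simple `task_id // tasks_per_group` grouping rule; the module sizes, group assignments, and prediction heads are hypothetical placeholders, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class CoarseToFineModel(nn.Module):
    """Hedged sketch of a coarse-to-fine hierarchy: shared bottom,
    group-level middle, task-specific top. All hyperparameters here
    are illustrative assumptions."""

    def __init__(self, hidden=768, groups=3, tasks_per_group=2, num_labels=2):
        super().__init__()
        # Bottom level: one encoder block trained on every task.
        self.shared = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=12, batch_first=True)
        # Mid level: one encoder block per group of related tasks.
        self.group_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=12, batch_first=True)
            for _ in range(groups))
        # Top level: one classification head per individual task.
        self.task_heads = nn.ModuleList(
            nn.Linear(hidden, num_labels) for _ in range(groups * tasks_per_group))
        self.tasks_per_group = tasks_per_group

    def forward(self, x, task_id):
        group_id = task_id // self.tasks_per_group  # hypothetical grouping rule
        h = self.shared(x)                   # basic language properties (all tasks)
        h = self.group_layers[group_id](h)   # knowledge shared within the group
        return self.task_heads[task_id](h[:, 0])  # per-task prediction on first token

model = CoarseToFineModel()
x = torch.randn(2, 16, 768)    # (batch, seq_len, hidden)
logits = model(x, task_id=3)   # task 3 is routed through group 1's mid-level layer
```

Routing each batch through only its own group's mid-level layer is what lets relevant tasks reinforce one another while shielding them from unrelated ones.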
Year: 2022
Venue: International Conference on Computational Linguistics
DocType: Conference
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Citations: 0
PageRank: 0.34
References: 0
Authors: 12
Name           Order  Citations  PageRank
Zhaoye Fei     1      0          0.34
Yu Tian        2      0          3.04
Yongkang Wu    3      0          0.34
Xinyu Zhang    4      8          3.53
Yutao Zhu      5      0          1.01
Liu Zheng      6      47         12.80
Jiawen Wu      7      0          0.68
Dejiang Kong   8      4          1.74
Ruofei Lai     9      0          0.34
Zhao Cao       10     6          3.85
Zhicheng Dou   11     706        41.96
Xipeng Qiu     12     556        63.33