Title
An End-to-End Scalable Iterative Sequence Tagging with Multi-Task Learning
Abstract
Multi-task learning (MTL) models, which pool examples from several tasks, have achieved remarkable results in language processing. However, multi-task learning is not always effective compared with single-task methods in sequence tagging. One possible reason is that existing approaches to multi-task sequence tagging often rely on lower-layer parameter sharing to connect different tasks. The lack of interaction between different tasks results in limited performance improvement. In this paper, we propose a novel multi-task learning architecture that explicitly and iteratively utilizes the prediction results of each task. We train our model for part-of-speech (POS) tagging, chunking and named entity recognition (NER) simultaneously. Experimental results show that without any task-specific features, our model obtains state-of-the-art performance on both chunking and NER.
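The core idea in the abstract — tasks interacting through each other's explicit predictions rather than only through shared lower layers — can be illustrated with a minimal sketch. Everything below (dimensions, the soft label-embedding feedback, the fixed iteration count) is an illustrative assumption, not the paper's actual architecture; real weights would come from a trained shared encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
SEQ_LEN, HID = 5, 8                # tokens per sentence, encoder hidden size
N_POS, N_CHUNK, N_NER = 4, 3, 3    # label-set sizes per task
LABEL_EMB = 2                      # size of each task's label embedding

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class IterativeMTLTagger:
    """Sketch: shared encoder states are repeatedly combined with label
    embeddings of every task's current predictions, so POS, chunking and
    NER interact through explicit prediction results."""

    def __init__(self):
        in_dim = HID + 3 * LABEL_EMB  # hidden state + one label embedding per task
        self.heads = {
            "pos":   rng.normal(size=(in_dim, N_POS)),
            "chunk": rng.normal(size=(in_dim, N_CHUNK)),
            "ner":   rng.normal(size=(in_dim, N_NER)),
        }
        self.label_emb = {
            "pos":   rng.normal(size=(N_POS, LABEL_EMB)),
            "chunk": rng.normal(size=(N_CHUNK, LABEL_EMB)),
            "ner":   rng.normal(size=(N_NER, LABEL_EMB)),
        }

    def tag(self, hidden, n_iter=3):
        # Start every task from a uniform label distribution.
        probs = {t: np.full((SEQ_LEN, W.shape[1]), 1.0 / W.shape[1])
                 for t, W in self.heads.items()}
        for _ in range(n_iter):
            # Soft label embeddings of each task's current predictions
            # are fed back as input features for the next iteration.
            feedback = np.concatenate(
                [probs[t] @ self.label_emb[t] for t in ("pos", "chunk", "ner")],
                axis=-1)
            x = np.concatenate([hidden, feedback], axis=-1)
            probs = {t: softmax(x @ W) for t, W in self.heads.items()}
        return {t: p.argmax(axis=-1) for t, p in probs.items()}

tagger = IterativeMTLTagger()
predictions = tagger.tag(rng.normal(size=(SEQ_LEN, HID)))
```

Each refinement step conditions every task head on all three tasks' previous outputs, which is the "iterative interaction" the abstract contrasts with plain parameter sharing.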
Year
2018
DOI
10.1007/978-3-319-99501-4_25
Venue
Lecture Notes in Artificial Intelligence
Keywords
Multi-task learning, Interactions, Sequence tagging
Field
Multi-task learning, End-to-end principle, Learning architecture, Computer science, Chunking (psychology), Artificial intelligence, Named-entity recognition, Machine learning, Performance improvement, Scalability
DocType
Conference
Volume
11109
ISSN
0302-9743
Citations
1
PageRank
0.40
References
14
Authors
6
Name          Order  Citations  PageRank
Lin Gui       1      18         6.43
Du Jiachen    2      36         9.02
Zhishan Zhao  3      3          0.75
Yulan He      4      1934       123.88
Xu Ruifeng    5      432        53.04
Chuang Fan    6      4          1.45