Title
Unified Language Model Pre-training for Natural Language Understanding and Generation
Abstract
This paper presents a new UNIfied pre-trained Language Model (UNILM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UNILM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UNILM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
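The mechanism described in the abstract is that one shared Transformer realizes the three pre-training objectives purely by changing the self-attention mask. Below is a minimal PyTorch sketch of how such masks could be constructed; it is not the official UniLM implementation, and the function names and the "1 = may attend, 0 = blocked" convention are illustrative assumptions.

import torch

def bidirectional_mask(seq_len: int) -> torch.Tensor:
    # Every token may attend to every other token (cloze-style bidirectional LM).
    return torch.ones(seq_len, seq_len)

def unidirectional_mask(seq_len: int) -> torch.Tensor:
    # Token i may attend only to tokens 0..i (left-to-right LM).
    return torch.tril(torch.ones(seq_len, seq_len))

def seq2seq_mask(src_len: int, tgt_len: int) -> torch.Tensor:
    # Source segment attends bidirectionally within itself; target segment
    # attends to the full source plus its own left context only.
    total = src_len + tgt_len
    mask = torch.zeros(total, total)
    mask[:src_len, :src_len] = 1                                          # source <-> source
    mask[src_len:, :src_len] = 1                                          # target -> source
    mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))   # target causal
    return mask

if __name__ == "__main__":
    # Rows are query positions, columns are key positions.
    print(seq2seq_mask(3, 2))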
Year
2019
Venue
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019)
Keywords
natural language understanding
Field
Natural language generation, F1 score, Automatic summarization, Question answering, Computer science, Natural language understanding, Artificial intelligence, Encoder, Natural language processing, Generative grammar, Language model
DocType
Journal
Volume
32
ISSN
1049-5258
Citations
8
PageRank
0.44
References
0
Authors
9
Name            Order  Citations  PageRank
Li Dong         1      582        31.86
Nan Yang        2      583        22.70
Wenhui Wang     3      135        6.52
Furu Wei        4      1956       107.57
Xiaodong Liu    5      135        17.46
Yu Wang         6      2279       211.60
Jianfeng Gao    7      5729       296.43
Ming Zhou       8      4262       251.74
Hsiao-Wuen Hon  9      1719       354.37