Title
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.
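In this framework, every task is handled by feeding the model a text string and training it to generate a target text string, with a short task prefix identifying the task. Below is a minimal Python sketch of that casting; the prefixes and sample pairs follow the examples shown in the paper (Figure 1), while the helper function and its name are purely illustrative and not part of the released code.

    # Illustrative sketch of the unified text-to-text format: every task is
    # reduced to a (source text, target text) pair, with a task prefix on the source.

    def to_text_to_text(task: str, **fields) -> str:
        """Build a prefixed source string for a task (hypothetical helper)."""
        if task == "translation_en_de":
            return f"translate English to German: {fields['text']}"
        if task == "summarization":
            return f"summarize: {fields['text']}"
        if task == "cola":   # grammatical acceptability (GLUE CoLA); target is a label as text
            return f"cola sentence: {fields['text']}"
        if task == "stsb":   # semantic similarity (GLUE STS-B); target is a number as text
            return f"stsb sentence1: {fields['s1']} sentence2: {fields['s2']}"
        raise ValueError(f"unknown task: {task}")

    # (source, target) pairs in the style of the paper's Figure 1.
    examples = [
        (to_text_to_text("translation_en_de", text="That is good."), "Das ist gut."),
        (to_text_to_text("cola", text="The course is jumping well."), "not acceptable"),
        (to_text_to_text("stsb",
                         s1="The rhino grazed on the grass.",
                         s2="A rhino is grazing in a field."), "3.8"),
    ]

    for source, target in examples:
        print(f"{source!r}  ->  {target!r}")

Because targets are also plain text, classification labels and regression values are simply emitted as strings, so the same model, loss, and decoding procedure serve every task.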
Year: 2020
Venue: Journal of Machine Learning Research
Keywords: transfer learning, natural language processing, multi-task learning, attention-based models, deep learning
DocType:
Volume: 21
Issue: 140
Journal:
ISSN: 1532-4435
Citations: 2
PageRank: 0.46
References: 0
Authors: 9
Name               Order   Citations   PageRank
Colin Raffel       1       190         21.50
Noam Shazeer       2       1089        43.70
Adam Roberts       3       2           0.46
Katherine J. Lee   4       5           1.66
Sharan Narang      5       2           0.46
Michael Matena     6       2           0.46
Yanqi Zhou         7       5           1.52
Wei Li             8       2           0.46
Peter J. Liu       9       269         12.28