Abstract |
---|
In recent years, researchers have tended to pre-train ever-larger language models to explore the upper limit of deep models. However, pre-training large language models costs intensive computational resources, and most of the models are trained from scratch without reusing existing pre-trained models, which is wasteful. In this paper, we propose bert2BERT, which can effectively transfer the knowledge of an existing smaller pre-trained model to a large model through parameter initialization and significantly improve the pre-training efficiency of the large model. Specifically, we extend the previous function-preserving method (Chen et al., 2016) proposed in computer vision to the Transformer-based language model, and further improve it by proposing a novel method, advanced knowledge, for the large model's initialization. In addition, a two-stage learning method is proposed to further accelerate the pre-training. We conduct extensive experiments on representative PLMs (e.g., BERT and GPT) and demonstrate that (1) our method can save a significant amount of training cost compared with baselines including learning from scratch, StackBERT (Gong et al., 2019), and MSLT (Yang et al., 2020); (2) our method is generic and applicable to different types of pre-trained models. In particular, bert2BERT saves about 45% and 47% of the computational cost of pre-training BERT_BASE and GPT_BASE, respectively, by reusing models of almost half their sizes. |
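
The abstract builds on the function-preserving expansion of Chen et al. (2016): a smaller model's weights initialize a wider model without changing the function it computes. Below is a minimal NumPy sketch of that idea for a two-layer feed-forward block; the function name `widen_ffn`, the shapes, and the ReLU check are illustrative assumptions, not the authors' released code (which also covers attention heads, depth expansion, and the paper's advanced knowledge initialization).

```python
# A minimal sketch of Net2Net-style function-preserving width expansion
# (Chen et al., 2016), the idea bert2BERT extends to Transformer layers.
# Names, shapes, and NumPy usage are illustrative assumptions.
import numpy as np

def widen_ffn(w_in, w_out, new_hidden, rng=None):
    """Widen the hidden dimension of a two-layer feed-forward block.

    w_in:  (d_model, hidden)  first projection
    w_out: (hidden, d_model)  second projection
    Returns (u_in, u_out) with hidden -> new_hidden such that the block
    computes the same function as before (up to float error), assuming an
    elementwise activation (e.g. ReLU) between the two projections.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    hidden = w_in.shape[1]
    assert new_hidden >= hidden
    # Map each new hidden unit to an existing one: the first `hidden` units
    # keep themselves, the extra units copy a randomly chosen existing unit.
    mapping = np.concatenate(
        [np.arange(hidden), rng.integers(0, hidden, new_hidden - hidden)]
    )
    counts = np.bincount(mapping, minlength=hidden)        # replication count per unit
    u_in = w_in[:, mapping]                                 # duplicate incoming weights
    u_out = w_out[mapping, :] / counts[mapping][:, None]    # split outgoing weights
    return u_in, u_out

# Sanity check: the widened block computes the same function.
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))
w_in, w_out = rng.standard_normal((8, 16)), rng.standard_normal((16, 8))
u_in, u_out = widen_ffn(w_in, w_out, new_hidden=24, rng=rng)
relu = lambda t: np.maximum(t, 0.0)
assert np.allclose(relu(x @ w_in) @ w_out, relu(x @ u_in) @ u_out)
```

Duplicated hidden units keep identical incoming weights, and their outgoing weights are divided by the replication count, so the sum over duplicates reproduces the original output exactly.
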
Year | DOI | Venue
---|---|---
2022 | 10.18653/v1/2022.acl-long.151 | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: Long Papers

DocType | Volume | Citations
---|---|---
Conference | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) | 0

PageRank | References | Authors
---|---|---
0.34 | 0 | 10

Name | Order | Citations | PageRank |
---|---|---|---
Cheng Chen | 1 | 1 | 1.03 |
Yichun Yin | 2 | 27 | 2.58 |
Lifeng Shang | 3 | 485 | 30.96 |
Xin Jiang | 4 | 150 | 32.43 |
Yujia Qin | 5 | 0 | 1.01 |
Fengyu Wang | 6 | 15 | 5.76 |
Zhi Wang | 7 | 0 | 0.34 |
Xiao Chen | 8 | 0 | 1.35 |
Zhiyuan Liu | 9 | 2037 | 123.68 |
Qun Liu | 10 | 2149 | 203.11 |