Abstract |
---|
Current pre-trained language models (PLMs) are typically trained on static data, ignoring that in real-world scenarios, streaming data from various sources may grow continuously. This requires PLMs to integrate information from all sources in a lifelong manner. Although this goal could be achieved by exhaustively pre-training on all existing data, such a process is known to be computationally expensive. To this end, we propose ELLE, which aims at efficient lifelong pre-training on emerging data. Specifically, ELLE consists of (1) function-preserved model expansion, which flexibly expands an existing PLM's width and depth to improve the efficiency of knowledge acquisition; and (2) pre-trained domain prompts, which disentangle the versatile knowledge learned during pre-training and stimulate the proper knowledge for downstream tasks. We experiment with ELLE on streaming data from 5 domains, using BERT and GPT. The results show the superiority of ELLE over various lifelong learning baselines in both pre-training efficiency and downstream performance. |
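The "function preserved model expansion" in the abstract generalizes Net2Net-style width growth: new hidden units are copies of existing ones, with outgoing weights rescaled so the expanded network computes the same function before further pre-training. Below is a minimal PyTorch sketch of that general idea under stated assumptions (an elementwise activation between two stacked linear layers); `widen_linear_pair` is a hypothetical helper written for illustration, not ELLE's released code.

```python
import torch
import torch.nn as nn

def widen_linear_pair(fc_in: nn.Linear, fc_out: nn.Linear, new_width: int):
    """Net2Net-style function-preserving width expansion (a sketch, not
    ELLE's exact recipe). Widens the hidden dimension between two stacked
    linear layers from fc_in.out_features to new_width by duplicating
    randomly chosen hidden units, then rescaling outgoing weights so that,
    for any elementwise activation act, fc_out(act(fc_in(x))) is unchanged."""
    old_width = fc_in.out_features
    assert new_width >= old_width
    # Map every new hidden unit to an existing one: identity for the first
    # old_width units, random duplicates for the remainder.
    mapping = torch.cat([
        torch.arange(old_width),
        torch.randint(0, old_width, (new_width - old_width,)),
    ])
    # How many copies of each old unit exist after expansion.
    counts = torch.bincount(mapping, minlength=old_width).float()

    wider_in = nn.Linear(fc_in.in_features, new_width)
    wider_out = nn.Linear(new_width, fc_out.out_features)
    with torch.no_grad():
        # Incoming weights/biases are copied verbatim for each duplicate.
        wider_in.weight.copy_(fc_in.weight[mapping])
        wider_in.bias.copy_(fc_in.bias[mapping])
        # Outgoing weights are split evenly among the copies so their
        # summed contribution matches the original unit's contribution.
        wider_out.weight.copy_(fc_out.weight[:, mapping] / counts[mapping])
        wider_out.bias.copy_(fc_out.bias)
    return wider_in, wider_out

# Sanity check: the widened pair computes the same function.
x = torch.randn(4, 16)
fc1, fc2 = nn.Linear(16, 32), nn.Linear(32, 8)
w1, w2 = widen_linear_pair(fc1, fc2, new_width=48)
assert torch.allclose(fc2(torch.relu(fc1(x))), w2(torch.relu(w1(x))), atol=1e-5)
```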
Year | DOI | Venue |
---|---|---|
2022 | 10.18653/v1/2022.findings-acl.220 | Findings of the Association for Computational Linguistics (ACL 2022)

DocType | Volume | Citations |
---|---|---|
Conference | Findings of the Association for Computational Linguistics: ACL 2022 | 0

PageRank | References | Authors |
---|---|---|
0.34 | 0 | 7

Name | Order | Citations | PageRank |
---|---|---|---|
Yujia Qin | 1 | 0 | 0.34 |
Jiajie Zhang | 2 | 0 | 0.34 |
Yankai Lin | 3 | 607 | 28.37 |
Zhiyuan Liu | 4 | 2037 | 123.68 |
Peng Li | 5 | 146 | 21.34 |
Maosong Sun | 6 | 2293 | 162.86 |
Jie Zhou | 7 | 13 | 11.09 |