Abstract
---

Task-agnostic knowledge distillation, a teacher-student framework, has proven effective for BERT compression. Although it achieves promising results on NLP tasks, it requires enormous computational resources. In this paper, we propose Extract Then Distill (ETD), a generic and flexible strategy that reuses the teacher's parameters for efficient and effective task-agnostic distillation and can be applied to students of any size. Specifically, we introduce two variants of ETD, ETD-Rand and ETD-Impt, which extract the teacher's parameters randomly and according to an importance metric, respectively. In this way, the student has already acquired some knowledge before distillation begins, which makes the distillation process converge faster. We demonstrate the effectiveness of ETD on the GLUE benchmark and SQuAD. The experimental results show that: (1) compared with the baseline without ETD, ETD saves 70% of the computation cost; moreover, it achieves better results than the baseline when using the same computing resources; (2) ETD is generic and proves effective for different distillation methods (e.g., TinyBERT and MiniLM) and for students of different sizes. Code is available at https://github.com/huawei-noah/Pretrained-Language-Model.
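The extraction step described in the abstract can be illustrated with a minimal sketch: slice a teacher weight matrix down to the student's hidden size, choosing which output neurons to keep either at random (ETD-Rand) or by an importance score (ETD-Impt). The helper `extract_weights` below is hypothetical, and the L1-magnitude score is only a stand-in for the paper's actual importance metric.

```python
import torch

def extract_weights(teacher_w: torch.Tensor, d_student: int,
                    method: str = "rand") -> torch.Tensor:
    """Slice a teacher weight matrix [d_teacher, d_in] down to [d_student, d_in].

    "rand" keeps a random subset of output neurons (in the spirit of ETD-Rand);
    "impt" keeps the neurons with the largest L1 magnitude -- a simple stand-in
    for the paper's importance metric (in the spirit of ETD-Impt).
    """
    d_teacher = teacher_w.size(0)
    if method == "rand":
        idx = torch.randperm(d_teacher)[:d_student]
    else:
        scores = teacher_w.abs().sum(dim=1)   # one importance score per output neuron
        idx = scores.topk(d_student).indices
    return teacher_w[idx.sort().values]       # preserve the teacher's neuron order

# Usage: initialize a 384-wide student layer from a 768-wide teacher layer.
teacher_layer = torch.randn(768, 768)
student_init = extract_weights(teacher_layer, 384, method="impt")
print(student_init.shape)  # torch.Size([384, 768])
```

Initializing the student this way, rather than from scratch, is what lets distillation start from a partially knowledgeable model and converge faster.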
Year | DOI | Venue
---|---|---
2021 | 10.1007/978-3-030-86365-4_46 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT III

Keywords | DocType | Volume
---|---|---
BERT, Knowledge distillation, Structured pruning | Conference | 12893

ISSN | Citations | PageRank
---|---|---
0302-9743 | 0 | 0.34

References | Authors
---|---
1 | 7

Name | Order | Citations | PageRank |
---|---|---|---
Cheng Chen | 1 | 1 | 1.03 |
Yichun Yin | 2 | 27 | 2.58 |
Lifeng Shang | 3 | 485 | 30.96 |
Wang Zhi | 4 | 461 | 51.72 |
Xin Jiang | 5 | 150 | 32.43 |
Xi Chen | 6 | 333 | 70.76 |
Qun Liu | 7 | 2149 | 203.11 |