Title
NAS-BERT: Task-Agnostic and Adaptive-Size BERT Compression with Neural Architecture Search
Abstract
While pre-trained language models (e.g., BERT) have achieved impressive results on different natural language processing tasks, they contain large numbers of parameters and incur high computational and memory costs, which makes them difficult to deploy in real-world applications. Therefore, model compression is necessary to reduce the computation and memory cost of pre-trained models. In this work, we aim to compress BERT and address the following two challenging practical issues: (1) the compression algorithm should be able to output multiple compressed models with different sizes and latencies, in order to support devices with different memory and latency limitations; (2) the algorithm should be downstream-task agnostic, so that the compressed models are generally applicable to different downstream tasks. We leverage techniques in neural architecture search (NAS) and propose NAS-BERT, an efficient method for BERT compression. NAS-BERT trains a large supernet on a carefully designed search space containing a variety of architectures and outputs multiple compressed models with adaptive sizes and latencies. Furthermore, the training of NAS-BERT is conducted on standard self-supervised pre-training tasks (e.g., masked language modeling) and does not depend on specific downstream tasks, so the compressed models can be used across various downstream tasks. The technical challenge of NAS-BERT is that training a large supernet on the pre-training task is extremely costly. We employ several techniques, including block-wise search, search space pruning, and performance approximation, to improve search efficiency and accuracy. Extensive experiments on the GLUE and SQuAD benchmark datasets demonstrate that NAS-BERT can find lightweight models with better accuracy than previous approaches, and that these models can be directly applied to different downstream tasks, with adaptive model sizes to meet different memory or latency requirements.
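The core mechanism the abstract describes (one weight-sharing supernet from which sub-models of different sizes and latencies are extracted) can be illustrated with a minimal, hypothetical PyTorch-style sketch. This is not the paper's code: the op set (bottleneck FFNs of varying width plus an identity op), the dimensions, and the squared-error stand-in for the masked-language-model loss are all illustrative assumptions, and the class names are invented for this sketch.

import random
import torch
import torch.nn as nn

class SuperLayer(nn.Module):
    """One supernet slot holding several candidate ops that share the position."""
    def __init__(self, dim, candidate_dims=(128, 256, 512)):
        super().__init__()
        # Each candidate is a bottleneck FFN with a different hidden width;
        # the final identity op effectively removes the layer entirely.
        self.ops = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, h), nn.GELU(), nn.Linear(h, dim))
             for h in candidate_dims] + [nn.Identity()]
        )

    def forward(self, x, choice):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, dim=512, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(SuperLayer(dim) for _ in range(num_layers))

    def forward(self, x, arch):
        # `arch` picks one op per slot; different choices yield sub-models
        # with different parameter counts and latencies.
        for layer, choice in zip(self.layers, arch):
            x = layer(x, choice)
        return x

net = SuperNet()
x = torch.randn(8, 16, 512)

# Single-path training step: sample a random sub-architecture and update only
# the weights on that path (a stand-in for the self-supervised pre-training loss).
arch = [random.randrange(len(layer.ops)) for layer in net.layers]
loss = net(x, arch).pow(2).mean()
loss.backward()

After such training, sub-architectures of different sizes can be extracted and evaluated without retraining from scratch, which is how a single supernet can serve many memory or latency budgets.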
Year
2021
DOI
10.1145/3447548.3467262
Venue
Knowledge Discovery and Data Mining
Keywords
BERT compression, task-agnostic, adaptive, neural architecture search, pre-training
DocType
Conference
Citations
2
PageRank
0.42
References
0
Authors
7
Name         Order  Citations  PageRank
Jin Xu       1      6          3.22
Xu Tan       2      88         23.94
Renqian Luo  3      28         3.58
Kaitao Song  4      7          4.26
Jian Li      5      2          14.50
Tao Qin      6      2384       147.25
Tie-yan Liu  7      4662       256.32