Title
Learning to Sample Replacements for ELECTRA Pre-Training.
Abstract
ELECTRA pretrains a discriminator to detect replaced tokens, where the replacements are sampled from a generator trained with masked language modeling. Despite its compelling performance, ELECTRA suffers from two issues. First, there is no direct feedback loop from the discriminator to the generator, which renders replacement sampling inefficient. Second, the generator's predictions tend to become over-confident as training proceeds, biasing the sampled replacements toward the correct tokens. In this paper, we propose two methods to improve replacement sampling for ELECTRA pre-training. Specifically, we augment sampling with a hardness prediction mechanism, so that the generator can encourage the discriminator to learn what it has not yet acquired. We also prove that efficient sampling reduces the training variance of the discriminator. Moreover, we propose to use a focal loss for the generator in order to alleviate the oversampling of correct tokens as replacements. Experimental results show that our method improves ELECTRA pre-training on various downstream tasks.
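The abstract describes ELECTRA-style replacement sampling from a masked-LM generator and a focal loss that down-weights tokens the generator already predicts confidently. The following is a minimal PyTorch-style sketch of those two ideas, not the authors' released implementation; the function names, the mask/label conventions, and the gamma value are assumptions made for illustration.

```python
# Illustrative sketch (assumed PyTorch setup, not the paper's official code).
import torch
import torch.nn.functional as F

def sample_replacements(gen_logits, input_ids, mask_positions):
    """Sample replacement tokens at masked positions from the generator's
    MLM distribution; label positions where the sample differs from the
    original token as "replaced" for the discriminator."""
    probs = F.softmax(gen_logits, dim=-1)                       # [B, T, V]
    sampled = torch.multinomial(probs.flatten(0, 1), 1).view_as(input_ids)
    corrupted = torch.where(mask_positions, sampled, input_ids)
    disc_labels = (corrupted != input_ids).long()               # 1 = replaced
    return corrupted, disc_labels

def generator_focal_loss(gen_logits, target_ids, mask_positions, gamma=2.0):
    """Focal MLM loss: the (1 - p_t)^gamma factor shrinks the loss on tokens
    the generator is already confident about, so correct tokens are sampled
    as replacements less often."""
    log_probs = F.log_softmax(gen_logits, dim=-1)
    target_log_probs = log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)
    focal_weight = (1.0 - target_log_probs.exp()) ** gamma
    loss = -(focal_weight * target_log_probs)
    return loss[mask_positions].mean()
```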
Year
2021
Venue
ACL/IJCNLP
DocType
Conference
Volume
2021.findings-acl
Citations
0
PageRank
0.34
References
0
Authors
5
Name        Order  Citations  PageRank
Yaru Hao    1      7          1.17
Li Dong     2      582        31.86
Hangbo Bao  3      18         3.42
Ke Xu       4      1433       99.79
Furu Wei    5      1956       107.57