Title
Visualizing and Understanding the Effectiveness of BERT
Abstract
Language model pre-training, such as BERT, has achieved remarkable results in many NLP tasks. However, it is unclear why the pre-training-then-fine-tuning paradigm can improve performance and generalization capability across different tasks. In this paper, we propose to visualize loss landscapes and optimization trajectories of fine-tuning BERT on specific datasets. First, we find that pre-training reaches a good initial point across downstream tasks, which leads to wider optima and easier optimization compared with training from scratch. We also demonstrate that the fine-tuning procedure is robust to overfitting, even though BERT is highly over-parameterized for downstream tasks. Second, the visualization results indicate that fine-tuning BERT tends to generalize better because of the flat and wide optima, and the consistency between the training loss surface and the generalization error surface. Third, the lower layers of BERT are more invariant during fine-tuning, which suggests that the layers that are close to input learn more transferable representations of language.
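The abstract's core technique is plotting the fine-tuning loss surface around the converged parameters. A common way to do this (in the style of 2-D loss-landscape visualization) is to perturb the trained weights along two random directions and evaluate the loss on a grid. The sketch below is illustrative only: `loss_surface` is a hypothetical helper, and a toy quadratic loss stands in for BERT's fine-tuning loss, which would require a full model and dataset.

```python
import numpy as np

def loss_surface(loss_fn, theta_star, alphas, betas, seed=0):
    """Evaluate loss_fn on a 2-D slice through parameter space.

    Perturbs the converged parameters theta_star along two random
    unit-norm directions d1, d2 and records the loss at each grid
    point (theta_star + a*d1 + b*d2).
    """
    rng = np.random.default_rng(seed)
    d1 = rng.standard_normal(theta_star.shape)
    d2 = rng.standard_normal(theta_star.shape)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    surface = np.empty((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            surface[i, j] = loss_fn(theta_star + a * d1 + b * d2)
    return surface

# Toy stand-in for a fine-tuning loss: a quadratic bowl whose minimum
# sits at theta_star, so the surface should dip at the grid center.
theta_star = np.zeros(10)
quad_loss = lambda theta: float(np.sum(theta ** 2))
grid = np.linspace(-1.0, 1.0, 21)
surface = loss_surface(quad_loss, theta_star, grid, grid)
```

The width of the basin around the center of such a surface is what distinguishes the "flat and wide optima" of pre-trained initialization from the sharper minima reached when training from scratch.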
Year
2019
DOI
10.18653/v1/D19-1424
Venue
EMNLP/IJCNLP (1)
DocType
Conference
Volume
D19-1
Citations
7
PageRank
0.49
References
0
Authors
4
Name       Order  Citations  PageRank
Yaru Hao   1      7          1.17
Li Dong    2      582        31.86
Furu Wei   3      1956       107.57
Ke Xu      4      1433       99.79