Title
The MultiBERTs: BERT Reproductions for Robustness Analysis
Abstract
Experiments with pretrained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure (which includes the model architecture, training data, initialization scheme, and loss function). Recent work has shown that re-running pretraining can lead to substantially different performance, suggesting that alternative evaluations are needed to make principled statements about procedures. To address this, we introduce MultiBERTs: a set of 25 BERT-base checkpoints, trained with hyper-parameters similar to those of the original BERT model but differing in random initialization and data shuffling. The aim is to enable researchers to draw robust and statistically justified conclusions about pretraining procedures. The full release includes 25 fully trained checkpoints, as well as statistical guidelines and a code library implementing our recommended hypothesis-testing methods. Finally, for five of these models, we release a set of 28 intermediate checkpoints to support research on learning dynamics.
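As an illustration of the seed-level analysis the abstract describes: the hypothesis-testing methods operate on per-seed scores rather than a single checkpoint. The sketch below is a minimal paired bootstrap over pretraining seeds, assuming one downstream score per checkpoint for each of two procedures; the function name and the synthetic scores are hypothetical illustrations, not the API of the released library.

# Minimal sketch (assumptions noted above): paired bootstrap over
# pretraining seeds to compare two procedures A and B.
import numpy as np

def seed_bootstrap_pvalue(scores_a, scores_b, n_boot=10_000, seed=0):
    """One-sided p-value for "procedure A outperforms procedure B".

    scores_a, scores_b: equal-length arrays with one evaluation score per
    pretraining run (e.g., 25 seeds). Hypothetical helper, not the paper's
    released implementation.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    n = len(a)
    deltas = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample seeds with replacement
        deltas[i] = a[idx].mean() - b[idx].mean()
    # Fraction of resamples in which A does not beat B.
    return float((deltas <= 0.0).mean())

# Usage with synthetic per-seed scores (25 seeds each, purely illustrative):
a = 0.84 + 0.01 * np.random.default_rng(1).standard_normal(25)
b = 0.83 + 0.01 * np.random.default_rng(2).standard_normal(25)
print(seed_bootstrap_pvalue(a, b))

Resampling the same seed indices for both arrays treats the runs as paired; an unpaired variant would draw indices for each array independently.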
Year
2022
Venue
International Conference on Learning Representations (ICLR)
Keywords
Pre-trained models, BERT, bootstrapping, hypothesis testing, robustness
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
12
Name               Order  Citations  PageRank
Thibault Sellam    1      37         9.53
Steve Yadlowsky    2      0          0.34
Jason Wei          3      0          2.37
Naomi Saphra       4      9          1.36
Alexander D'Amour  5      0          0.34
Tal Linzen         6      52         14.82
Jasmijn Bastings   7      0          0.68
Iulia Turc         8      0          0.34
Jacob Eisenstein   9      2098       135.64
Dipanjan Das       10     1619       75.14
Ian Tenney         11     4          3.79
Ellie Pavlick      12     116        21.07