Title
Do Transformer Modifications Transfer Across Implementations and Applications?
Abstract
The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.
Year
2021
Venue
EMNLP
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors (16)
Name              Order   Citations   PageRank
Sharan Narang     1       0           0.68
Hyung Won Chung   2       0           3.04
Yi Tay            3       229         28.97
William Fedus     4       0           1.35
Thibault Fevry    5       0           1.01
Michael Matena    6       0           0.34
Karishma Malkan   7       0           0.34
Noah Fiedel       8       0           0.34
Noam Shazeer      9       1089        43.70
Zhenzhong Lan     10      475         23.31
Yanqi Zhou        11      0           2.03
Wei Li            12      24          2.29
Nan Ding          13      0           0.68
Jake Marcus       14      0           0.34
Adam Roberts      15      0           0.34
Colin Raffel      16      190         21.50