Title
Inspecting Unification of Encoding and Matching with Transformer: A Case Study of Machine Reading Comprehension
Abstract
Most machine reading comprehension (MRC) models handle encoding and matching separately, with different network architectures. In contrast, pretrained language models built from Transformer layers, such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2018), have achieved competitive performance on MRC. A research question naturally arises: apart from the benefits of pre-training, how much of the performance gain comes from the unified network architecture? In this work, we evaluate and analyze unifying the encoding and matching components with a Transformer for the MRC task. Experimental results on SQuAD show that the unified model outperforms previous networks that treat encoding and matching separately. We also introduce a metric to inspect whether a Transformer layer tends to perform encoding or matching. The analysis shows that the unified model learns different modeling strategies from previous manually designed models.
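The abstract does not specify how the encoding-vs-matching metric is defined. As a rough illustration only, the sketch below computes one hypothetical proxy: the fraction of a layer's attention mass that crosses the question/passage boundary in a BERT-style concatenated input. The function name and the proxy itself are assumptions for illustration, not the metric introduced in the paper.

# Hypothetical proxy, NOT the paper's metric: a layer whose attention stays
# within the question or within the passage behaves more like an encoder,
# while attention crossing the boundary behaves more like a matcher.
import numpy as np

def cross_segment_attention_ratio(attn, is_question):
    """
    attn: (num_heads, seq_len, seq_len) attention weights of one Transformer
          layer; each row (query position) sums to 1.
    is_question: (seq_len,) boolean array, True for question tokens, False for
          passage tokens, assuming a [question; passage] concatenated input.
    Returns the fraction of total attention mass that crosses the
    question/passage boundary (higher = more matching-like).
    """
    is_question = np.asarray(is_question, dtype=bool)
    # cross[i, j] is True when query i and key j lie in different segments.
    cross = is_question[:, None] != is_question[None, :]
    cross_mass = (attn * cross[None, :, :]).sum()
    return float(cross_mass / attn.sum())

# Toy usage: 2 heads, 3 question tokens followed by 5 passage tokens.
rng = np.random.default_rng(0)
attn = rng.random((2, 8, 8))
attn /= attn.sum(axis=-1, keepdims=True)        # normalize rows like softmax
segments = np.array([True] * 3 + [False] * 5)   # question vs. passage tokens
print(cross_segment_attention_ratio(attn, segments))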
Year
2019
DOI
10.18653/v1/D19-5802
Venue
MRQA@EMNLP
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
8
Name           Order   Citations   PageRank
Hangbo Bao     1       18          3.42
Li Dong        2       582         31.86
Furu Wei       3       1956        107.57
Wenhui Wang    4       135         6.52
Nan Yang       5       583         22.70
Lizhen Cui     6       154         38.68
Songhao Piao   7       1           1.76
Ming Zhou      8       4262        251.74