Title
GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing
Abstract
We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data. We construct synthetic question-SQL pairs over high-quality tables via a synchronous context-free grammar (SCFG). We pre-train our model on the synthetic data to inject important structural properties commonly found in semantic parsing into the pre-trained language model. To maintain the model's ability to represent real-world data, we also include masked language modeling (MLM) on several existing table-related datasets to regularize the pre-training process. Our proposed pre-training strategy is highly data-efficient. When combined with strong base semantic parsers, GraPPa achieves new state-of-the-art results on four popular fully supervised and weakly supervised table semantic parsing tasks.
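The SCFG-based data synthesis in the abstract can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual grammar: each production pairs a natural-language template with a SQL template over shared nonterminals, so filling both sides with the same arguments yields an aligned question-SQL pair. The rule set and the `generate_pair` helper are hypothetical.

```python
import random

# Illustrative SCFG productions (assumed, not the paper's grammar):
# each entry pairs a question template with a SQL template that share
# the same nonterminals ({table}, {col}, {col2}, {val}).
RULES = [
    ("what is the maximum {col} in {table}?",
     "SELECT MAX({col}) FROM {table}"),
    ("show all {col} where {col2} is greater than {val}",
     "SELECT {col} FROM {table} WHERE {col2} > {val}"),
]

def generate_pair(table, columns, value, rng=random):
    """Sample one production and fill both sides with the same
    arguments, yielding one aligned (question, SQL) training pair."""
    q_tpl, s_tpl = rng.choice(RULES)
    col, col2 = rng.sample(columns, 2)
    args = {"table": table, "col": col, "col2": col2, "val": value}
    return q_tpl.format(**args), s_tpl.format(**args)

question, sql = generate_pair("singers", ["name", "age", "country"], 30)
```

Sampling many such pairs over real table schemas would produce the kind of synthetic pre-training corpus the abstract describes.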
Year
2021
Venue
ICLR
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
9
Name             Order  Citations  PageRank
Tao Yu           1      25         6.78
Chien-Sheng Wu   2      35         10.91
Victoria Lin     3      45         3.39
Bailin Wang      4      0          0.34
Yi Chern Tan     5      4          2.08
Xinyi Yang       6      0          0.34
Dragomir Radev   7      51673      74.13
Richard Socher   8      67702      30.61
Caiming Xiong    9      9696       9.56