Title
Maximum Bayes Smatch Ensemble Distillation for AMR Parsing
Abstract
AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performance forward. However, for most recent high-performing parsers, the effect of self-learning and silver data generation seems to be fading. In this paper we show that it is possible to overcome these diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation. In an extensive experimental setup, we push single-model English parser performance above 85 Smatch for the first time and return to substantial gains. We also attain a new state-of-the-art for cross-lingual AMR parsing for Chinese, German, Italian and Spanish. Finally, we explore the impact of the proposed distillation technique on domain adaptation, and show that it can produce gains rivaling those of human-annotated data for QALD-9 and achieve a new state-of-the-art for BioAMR.
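To make the core idea of the abstract concrete, the sketch below illustrates a minimum-Bayes-risk style Smatch ensembling step followed by silver-data construction for distillation, in the spirit of the method described above. It is only an illustration under assumptions: the function names (select_mbr_parse, build_silver_corpus) and the pluggable smatch_f1 scorer are hypothetical and not the paper's implementation; any Smatch scorer (for example, one built on the smatch Python package) could be supplied.

    from typing import Callable, List, Tuple

    def select_mbr_parse(
        candidates: List[str],
        smatch_f1: Callable[[str, str], float],
    ) -> str:
        """Pick the candidate AMR graph with the highest average Smatch
        against all other candidates (a minimum-Bayes-risk style vote).

        candidates: AMR graphs (e.g., PENMAN strings), one per ensemble model.
        smatch_f1: any scorer returning Smatch F1 between two graphs.
        """
        best_parse, best_score = candidates[0], float("-inf")
        for i, cand in enumerate(candidates):
            others = [c for j, c in enumerate(candidates) if j != i]
            # Average agreement of this candidate with the rest of the ensemble.
            score = sum(smatch_f1(cand, o) for o in others) / max(len(others), 1)
            if score > best_score:
                best_parse, best_score = cand, score
        return best_parse

    def build_silver_corpus(
        sentences: List[str],
        ensemble_outputs: List[List[str]],
        smatch_f1: Callable[[str, str], float],
    ) -> List[Tuple[str, str]]:
        """Pair each sentence with its ensemble-selected parse; the resulting
        silver corpus would then be used to train (distill) a single student parser."""
        return [
            (sent, select_mbr_parse(cands, smatch_f1))
            for sent, cands in zip(sentences, ensemble_outputs)
        ]

Passing the scorer as a parameter keeps the sketch independent of any particular Smatch implementation; the selection and distillation steps themselves do not depend on how the pairwise scores are computed.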
Year
DOI
Venue
2022
10.18653/V1/2022.NAACL-MAIN.393
North American Chapter of the Association for Computational Linguistics (NAACL)
DocType: Conference
Citations: 0
PageRank: 0.34
References: 0
Authors: 6
Name | Order | Citations | PageRank
Young-Suk Lee | 1 | 264 | 25.78
Ramon Fernandez Astudillo | 2 | 24 | 6.40
Thanh Lam Hoang | 3 | 0 | 0.34
Tahira Naseem | 4 | 1 | 3.19
Radu Florian | 5 | 924 | 91.44
Salim Roukos | 6 | 6248 | 845.50