Title
Revisiting Tri-training of Dependency Parsers.
Abstract
We compare two orthogonal semi-supervised learning techniques, namely tri-training and pretrained word embeddings, in the task of dependency parsing. We explore language-specific FastText and ELMo embeddings and multilingual BERT embeddings. We focus on a low-resource scenario as semi-supervised learning can be expected to have the most impact here. Based on treebank size and available ELMo models, we select Hungarian, Uyghur (a zero-shot language for mBERT) and Vietnamese. Furthermore, we include English in a simulated low-resource setting. We find that pretrained word embeddings make more effective use of unlabelled data than tri-training but that the two approaches can be successfully combined.
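The abstract names tri-training as one of the two semi-supervised techniques under comparison. For background only, the following is a minimal, simplified sketch of the classic agreement-based tri-training loop (Zhou and Li, 2005), with scikit-learn classifiers standing in for the paper's dependency parsers; the function name tri_train, the toy data and the choice of DecisionTreeClassifier are illustrative assumptions, not the authors' implementation.

# Hedged sketch: generic tri-training with interchangeable scikit-learn
# models standing in for the three parsers; not the authors' code.
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample


def tri_train(base_model, X_lab, y_lab, X_unlab, rounds=3):
    """Train three models on bootstrap samples of the labelled data, then in
    each round let every pair of models pseudo-label the unlabelled data for
    the third model wherever the pair agrees."""
    models = [clone(base_model).fit(*resample(X_lab, y_lab, random_state=i))
              for i in range(3)]
    for _ in range(rounds):
        updated = []
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pred_j = models[j].predict(X_unlab)
            pred_k = models[k].predict(X_unlab)
            agree = pred_j == pred_k          # where the other two agree
            # Augment model i's training data with the agreed pseudo-labels.
            X_aug = np.concatenate([X_lab, X_unlab[agree]])
            y_aug = np.concatenate([y_lab, pred_j[agree]])
            updated.append(clone(base_model).fit(X_aug, y_aug))
        models = updated
    return models


# Illustrative usage on synthetic data (assumed, for demonstration only).
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(40, 5))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unlab = rng.normal(size=(400, 5))
models = tri_train(DecisionTreeClassifier(max_depth=3), X_lab, y_lab, X_unlab)

In the paper's setting the three models are dependency parsers and the pseudo-labelled items are parsed sentences rather than feature vectors, but the agreement-driven exchange of automatically labelled data is the same idea.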
Year
2021
Venue
EMNLP
DocType
Conference
Volume
2021.emnlp-main
Citations
0
PageRank
0.34
References
0
Authors
2
Name             Order  Citations  PageRank
Joachim Wagner   1      103        7.74
Jennifer Foster  2      454        38.25