Title
Polyglot Contextual Representations Improve Crosslingual Transfer
Abstract
We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual word representations with those of multilingual representation learning. We produce language models from dissimilar language pairs (English/Arabic and English/Chinese) and use them in dependency parsing, semantic role labeling, and named entity recognition, with comparisons to monolingual and non-contextual variants. Our results provide further evidence for the benefits of polyglot learning, in which representations are shared across multiple languages.
Year
2019
DOI
10.18653/v1/n19-1392
Venue
North American Chapter of the Association for Computational Linguistics
Field
Arabic, Polyglot, Computer science, Dependency grammar, Natural language processing, Artificial intelligence, Named-entity recognition, Language model, Feature learning, Semantic role labeling
DocType
Journal
Volume
abs/1902.09697
Citations
1
PageRank
0.35
References
20
Authors
3

Name             Order  Citations  PageRank
Phoebe Mulcaire  1      3          1.40
Jungo Kasai      2      7          3.85
Noah A. Smith    3      58673      14.27