Title
Towards Zero-shot Language Modelling
Abstract
Can we construct a neural model that is inductively biased towards learning human languages? Motivated by this question, we aim at constructing an informative prior over neural weights, in order to adapt quickly to held-out languages in the task of character-level language modeling. We infer this distribution from a sample of typologically diverse training languages via Laplace approximation. The use of such a prior outperforms baseline models with an uninformative prior (so-called "fine-tuning") in both zero-shot and few-shot settings. This shows that the prior is imbued with universal phonological knowledge. Moreover, we harness additional language-specific side information as distant supervision for held-out languages. Specifically, we condition language models on features from typological databases, by concatenating them to hidden states or generating weights with hyper-networks. These features appear beneficial in the few-shot setting, but not in the zero-shot setting. Since the paucity of digital texts affects the majority of the world's languages, we hope that these findings will help broaden the scope of applications for language technology.
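The core idea described in the abstract, a Laplace-approximated prior over the weights of a multilingual character-level language model that then regularises adaptation to a held-out language, can be sketched roughly as below. This is a hedged illustration, not the authors' released code: the model `model`, the iterable `train_batches`, and the diagonal-Fisher estimate of the Hessian are assumptions made for the sketch, and the paper may use a different curvature approximation.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# fit a character-level LM on typologically diverse languages to get a MAP
# estimate, approximate the posterior around it with a Gaussian whose
# precision is a diagonal Fisher, and reuse that Gaussian as an informative
# prior (a quadratic penalty) when adapting to a held-out language.
import torch
import torch.nn.functional as F


def diagonal_fisher(model, train_batches):
    """Estimate the diagonal Fisher information at the trained (MAP) weights."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in train_batches:          # assumed (batch, seq) tensors
        model.zero_grad()
        logits = model(inputs)                     # assumed shape (batch, seq, vocab)
        loss = F.cross_entropy(logits.flatten(0, 1), targets.flatten())
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2  # squared gradients ~ diagonal Fisher
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


def laplace_prior_penalty(model, map_params, fisher, strength=1.0):
    """Quadratic penalty from the Gaussian prior N(theta; theta_MAP, F^-1), up to a constant."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - map_params[n]) ** 2).sum()
    return 0.5 * strength * penalty


# Usage during few-shot adaptation to a held-out language (sketch):
#   map_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = diagonal_fisher(model, train_batches)
#   loss = lm_loss_on_new_language + laplace_prior_penalty(model, map_params, fisher)
# Setting the penalty to zero recovers the uninformative-prior ("fine-tuning") baseline.
```

The typological side information mentioned in the abstract would, under the same reading, enter either by concatenating a language's feature vector to the hidden states at each step or by letting a hyper-network map that vector to (part of) the model's weights; the sketch above covers only the prior.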
Year: 2019
DOI: 10.18653/v1/D19-1288
Venue: EMNLP/IJCNLP (1)
DocType: Conference
Volume: aclanthology.org
Citations: 0
PageRank: 0.34
References: 0
Authors: 5
Name                   Order   Citations   PageRank
Edoardo Maria Ponti    1       5           4.47
Ivan Vulic             2       462         52.59
Ryan Cotterell         3       0           12.51
Roi Reichart           4       760         53.53
Anna Korhonen          5       1336        92.50