Abstract

In this paper, we propose a novel finetuning algorithm for the recently introduced multi-way, multilingual neural machine translation model that enables zero-resource machine translation. When used together with novel many-to-one translation strategies, we empirically show that this finetuning algorithm allows the multi-way, multilingual model to translate a zero-resource language pair (1) as well as a single-pair neural translation model trained with up to 1M direct parallel sentences of the same language pair, and (2) better than the pivot-based translation strategy, while keeping only one additional copy of attention-related parameters.
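As a rough illustration of the two strategies the abstract compares, below is a minimal Python sketch contrasting pivot-based translation (source → pivot → target, two decoding passes) with direct zero-resource translation (a single pass through the finetuned multilingual model). All names here (`translate`, `pivot_translate`, `direct_zero_resource_translate`) are hypothetical placeholders for exposition, not the paper's implementation.

```python
# Hedged sketch, not the paper's code: contrasts the two zero-resource
# decoding strategies described in the abstract.

def translate(sentence: str, src: str, tgt: str) -> str:
    # Hypothetical stand-in for one decoding pass of a multi-way,
    # multilingual NMT model; a real system would run the shared
    # encoder, attention, and decoder here.
    return f"<{tgt} translation of [{sentence}] from {src}>"

def pivot_translate(sentence: str, src: str, pivot: str, tgt: str) -> str:
    # Pivot-based strategy: src -> pivot, then pivot -> tgt.
    # Two decoding passes, so errors can compound.
    intermediate = translate(sentence, src, pivot)
    return translate(intermediate, pivot, tgt)

def direct_zero_resource_translate(sentence: str, src: str, tgt: str) -> str:
    # Direct strategy: after finetuning, the multilingual model decodes
    # the zero-resource pair (src, tgt) in a single pass.
    return translate(sentence, src, tgt)

if __name__ == "__main__":
    s = "Una frase en español."
    print(pivot_translate(s, src="es", pivot="en", tgt="fr"))
    print(direct_zero_resource_translate(s, src="es", tgt="fr"))
```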
Year | DOI | Venue
---|---|---
2016 | 10.18653/v1/D16-1026 | EMNLP

DocType | Volume | Citations
---|---|---
Conference | abs/1606.04164 | 27

PageRank | References | Authors
---|---|---
0.97 | 20 | 5

Name | Order | Citations | PageRank |
---|---|---|---|
Orhan Firat | 1 | 281 | 29.13 |
Baskaran Sankaran | 2 | 155 | 13.65 |
Yaser Al-Onaizan | 3 | 540 | 38.51 |
Fatos T. Yarman-Vural | 4 | 287 | 27.11 |
Kyunghyun Cho | 5 | 6803 | 316.85 |