Abstract
---
In this paper, we enhance attention-based neural machine translation (NMT) by adding explicit coverage embedding models to alleviate the issues of repeated and dropped translations in NMT. For each source word, our model starts with a full coverage embedding vector to track its coverage status, and then keeps updating the vector with neural networks as the translation proceeds. Experiments on a large-scale Chinese-to-English task show that our enhanced model significantly improves translation quality on various test sets over a strong large-vocabulary NMT system.
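The abstract outlines the core mechanism: each source word begins with a "full" coverage embedding that a neural network updates at every decoding step, conditioned on how much attention that word has received. Below is a minimal sketch of such an update in PyTorch, assuming a GRU-based update rule (one of the variants the paper describes); the dimensions, the `CoverageEmbedding` class, and the exact inputs to the update are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class CoverageEmbedding(nn.Module):
    """Per-source-word coverage embeddings, updated step by step (sketch)."""

    def __init__(self, cov_dim: int, tgt_emb_dim: int):
        super().__init__()
        # Learned "full" coverage vector: every source word starts here.
        self.full = nn.Parameter(torch.randn(cov_dim))
        # GRU update: input is [attention weight ; emitted target embedding].
        self.gru = nn.GRUCell(1 + tgt_emb_dim, cov_dim)

    def init_coverage(self, batch: int, src_len: int) -> torch.Tensor:
        # (batch, src_len, cov_dim): all positions begin fully "uncovered".
        return self.full.expand(batch, src_len, -1).contiguous()

    def step(self, cov: torch.Tensor, attn: torch.Tensor,
             y_emb: torch.Tensor) -> torch.Tensor:
        # cov:   (batch, src_len, cov_dim)  current coverage embeddings
        # attn:  (batch, src_len)           attention weights at this step
        # y_emb: (batch, tgt_emb_dim)       embedding of the word just emitted
        b, s, d = cov.shape
        y = y_emb.unsqueeze(1).expand(b, s, -1)
        inp = torch.cat([attn.unsqueeze(-1), y], dim=-1)
        new_cov = self.gru(inp.reshape(b * s, -1), cov.reshape(b * s, d))
        return new_cov.view(b, s, d)

# Toy usage: one decoding step over a batch of 2 sentences of length 7.
cov_model = CoverageEmbedding(cov_dim=100, tgt_emb_dim=128)
cov = cov_model.init_coverage(batch=2, src_len=7)
attn = torch.softmax(torch.randn(2, 7), dim=-1)  # attention at step t
y_emb = torch.randn(2, 128)                      # target word emitted at t
cov = cov_model.step(cov, attn, y_emb)           # feeds the next attention
```

In a full decoder, the updated coverage embeddings would be combined with the source hidden states when scoring the next attention step, which is what discourages the model from re-attending to already-translated words or skipping untranslated ones.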
Year | DOI | Venue
---|---|---
2016 | 10.18653/v1/D16-1096 | EMNLP
Field | DocType | Volume
---|---|---
Embedding, Computer science, Machine translation, Speech recognition, Natural language processing, Artificial intelligence, Artificial neural network, Vocabulary, Machine learning | Conference | D16-1
Citations | PageRank | References
---|---|---
30 | 1.29 | 12
Authors
---
4
Name | Order | Citations | PageRank |
---|---|---|---
Haitao Mi | 1 | 479 | 23.40 |
Baskaran Sankaran | 2 | 155 | 13.65 |
Zhiguo Wang | 3 | 354 | 24.64 |
Abe Ittycheriah | 4 | 318 | 22.92 |