Abstract |
---|
We offer a simple and effective method to seek a better balance between model confidence and length preference for Neural Machine Translation (NMT). Unlike the popular length normalization and coverage models, our method requires neither training nor reranking of limited n-best outputs. Moreover, it is robust to large beam sizes, which has not been well studied in previous work. On the Chinese-English and English-German translation tasks, our approach yields +0.4~1.5 BLEU improvements over state-of-the-art baselines. |
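The "length normalization" the abstract contrasts against is typically the GNMT-style length penalty of Wu et al. (2016), which rescales a hypothesis's log-probability during beam search so that shorter outputs are not systematically favored. Below is a minimal sketch of that baseline technique, not the paper's proposed method; the function name, the example scores, and alpha = 0.6 are illustrative assumptions.

```python
def length_normalized_score(log_prob: float, length: int, alpha: float = 0.6) -> float:
    """GNMT-style length normalization (Wu et al., 2016).

    Divides the hypothesis log-probability by a length penalty so that
    beam search does not systematically prefer shorter translations.
    This is the common baseline the abstract contrasts against, not the
    paper's method; alpha = 0.6 is an illustrative, commonly used value.
    """
    length_penalty = ((5.0 + length) / 6.0) ** alpha
    return log_prob / length_penalty

# Illustrative scores (assumed, not from the paper): the raw log-probability
# favors the shorter hypothesis, but the normalized score favors the longer one.
short_hyp = length_normalized_score(log_prob=-4.0, length=5)   # ~ -2.94
long_hyp = length_normalized_score(log_prob=-5.0, length=12)   # ~ -2.68
assert long_hyp > short_hyp
```

Such penalties require tuning and, like n-best reranking, can degrade at large beam sizes; the abstract's claim is that the proposed method avoids both, though its exact scoring function is not given in this record.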
Year | Venue | Field
---|---|---|
2018 | Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Vol 2 | Computer science, Machine translation, Artificial intelligence, Natural language processing

DocType | Volume | Citations
---|---|---|
Conference | P18-2 | 0

PageRank | References | Authors
---|---|---|
0.34 | 0 | 6

Name | Order | Citations | PageRank |
---|---|---|---|
Yanyang Li | 1 | 3 | 1.42 |
Tong Xiao | 2 | 131 | 23.91 |
Yinqiao Li | 3 | 6 | 2.47 |
Qiang Wang | 4 | 3 | 0.72 |
Changming Xu | 5 | 0 | 0.34 |
Jingbo Zhu | 6 | 703 | 64.21 |