Abstract |
---|
Traditional neural machine translation (NMT) methods use word-level context to predict target-language translations while neglecting sentence-level context, which has been shown to benefit translation prediction in statistical machine translation. This paper represents sentence-level context as latent topic representations obtained with a convolutional neural network, and designs a topic attention mechanism to integrate source sentence-level topic context into both attention-based and Transformer-based NMT. In particular, our method improves NMT by jointly modeling source topics and translations. Experiments on the large-scale LDC Chinese-to-English and WMT’14 English-to-German translation tasks show that the proposed approach achieves significant improvements over baseline systems.
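The abstract describes extracting latent topic representations from the source sentence with a convolution over word embeddings and then attending over them from the decoder. Below is a minimal NumPy sketch of that general idea, not the paper's actual model: all dimensions, weight matrices, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def conv_topic_representations(word_embs, filters, width=3):
    """1-D convolution over width-word windows; each row of the result
    is a latent topic representation for one window position."""
    n, d = word_embs.shape
    windows = np.stack([word_embs[i:i + width].reshape(-1)
                        for i in range(n - width + 1)])  # (positions, width*d)
    return np.tanh(windows @ filters)                    # (positions, topic_dim)

def topic_attention(topics, dec_state, W):
    """Score each topic vector against the decoder state and return the
    attention-weighted topic context vector plus the weights."""
    scores = topics @ (W @ dec_state)  # one score per topic vector
    alpha = softmax(scores)
    return alpha @ topics, alpha       # context: (topic_dim,)

# Toy source sentence: 6 words with 8-dim embeddings (illustrative sizes).
word_embs = rng.normal(size=(6, 8))
filters = rng.normal(size=(3 * 8, 5))  # maps a 3-word window to 5 topic features
topics = conv_topic_representations(word_embs, filters)

dec_state = rng.normal(size=10)        # hypothetical decoder hidden state
W = rng.normal(size=(5, 10))
context, alpha = topic_attention(topics, dec_state, W)
print(context.shape, round(alpha.sum(), 6))  # (5,) 1.0
```

The topic context vector would then be fed into the decoder alongside the standard word-level attention context; how the two are combined (concatenation, gating, etc.) is a design choice the sketch leaves open.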
Year | DOI | Venue |
---|---|---|
2019 | 10.1109/TASLP.2019.2937190 | IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) |
Keywords | Field | DocType |
Task analysis, Speech processing, Decoding, Computer architecture, Convolution, Feature extraction, Context modeling | Language translation, Convolutional neural network, Computer science, Machine translation, Speech recognition, Sentence | Journal |
Volume | Issue | ISSN |
27 | 12 | 2329-9290 |
Citations | PageRank | References |
4 | 0.40 | 14 |
Authors |
---|
5 |
Name | Order | Citations | PageRank |
---|---|---|---|
Kehai Chen | 1 | 43 | 16.34 |
Rui Wang | 2 | 19 | 2.68 |
Masao Utiyama | 3 | 714 | 86.69 |
Eiichiro Sumita | 4 | 1466 | 190.87
Tiejun Zhao | 5 | 643 | 102.68 |