Title
Integrating Prior Translation Knowledge Into Neural Machine Translation
Abstract
Neural machine translation (NMT), an encoder-decoder neural language model equipped with an attention mechanism, has achieved impressive results on various machine translation tasks in the past several years. However, the language-model nature of NMT tends to produce fluent yet sometimes unfaithful translations, which limits further improvement of translation quality. In response to this problem, we propose a simple and efficient method to integrate prior translation knowledge into NMT in a universal manner that is compatible with neural networks. Meanwhile, the method enables NMT to take source-side, cross-lingual translation knowledge into account during training, thereby making full use of the prior translation knowledge to enhance the performance of NMT. Experimental results on two large-scale benchmark translation tasks demonstrate that our approach achieves a significant improvement over a strong baseline.
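To make the idea in the abstract concrete, below is a minimal, hypothetical sketch of one generic way bilingual-lexicon prior knowledge could be fused with a Transformer encoder: each source token is paired with a candidate target token from a lexicon, and a learned gate decides per position how much of that hint to mix into the source embedding. The class name LexiconPriorEncoder, the gating scheme, and all hyperparameters are illustrative assumptions, not the method described in the paper.

```python
# Illustrative sketch only: one generic way to fuse bilingual-lexicon prior
# knowledge with source embeddings before a Transformer encoder. This is an
# assumption for exposition, not the authors' method.
import torch
import torch.nn as nn


class LexiconPriorEncoder(nn.Module):
    """Encode a source sentence together with lexicon-derived target hints.

    For each source token we assume a candidate target token id has been
    looked up from a bilingual lexicon (hypothetical preprocessing step),
    embed it, and gate the two representations so the encoder can decide
    per position how much prior knowledge to use.
    """

    def __init__(self, src_vocab, tgt_vocab, d_model=512, nhead=8, nlayers=6):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, d_model)
        self.tgt_hint_embed = nn.Embedding(tgt_vocab, d_model)
        self.gate = nn.Linear(2 * d_model, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)

    def forward(self, src_ids, hint_ids):
        # src_ids, hint_ids: (batch, src_len); hint_ids come from the lexicon.
        src = self.src_embed(src_ids)
        hint = self.tgt_hint_embed(hint_ids)
        g = torch.sigmoid(self.gate(torch.cat([src, hint], dim=-1)))
        fused = g * src + (1.0 - g) * hint  # per-position soft fusion
        return self.encoder(fused)


if __name__ == "__main__":
    model = LexiconPriorEncoder(src_vocab=1000, tgt_vocab=1200)
    src = torch.randint(0, 1000, (2, 7))
    hints = torch.randint(0, 1200, (2, 7))
    print(model(src, hints).shape)  # torch.Size([2, 7, 512])
```

In such a gated design, the model can fall back to the plain source embedding when a lexicon entry is noisy or missing, which is one common way to keep external priors from hurting fluency.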
Year
2022
DOI
10.1109/TASLP.2021.3138714
Venue
IEEE/ACM Transactions on Audio, Speech and Language Processing
Keywords
Machine translation, Knowledge representation, Training, Transformers, Speech processing, Decoding, Task analysis, Bilingual lexicon knowledge, prior knowledge representation, self-attention networks, machine translation
DocType
Journal
Volume
10.5555
Issue
taslp.2022.issue-30
ISSN
2329-9290
Citations
0
PageRank
0.34
References
11
Authors
4
Name               Order   Citations   PageRank
Kehai Chen         1       43          16.34
Rui Wang           2       492         30.72
Masao Utiyama      3       714         86.69
Eiichiro SUMITA    4       1466        190.87