Title
Towards Understanding Neural Machine Translation with Word Importance
Abstract
Although neural machine translation (NMT) has advanced the state of the art on various language pairs, the interpretability of NMT remains unsatisfactory. In this work, we propose to address this gap by focusing on understanding the input-output behavior of NMT models. Specifically, we measure word importance by attributing the NMT output to every input word through a gradient-based method. We validate the approach across several perturbation operations, language pairs, and model architectures, demonstrating its superiority in identifying input words with higher influence on translation performance. Encouragingly, the calculated importance can serve as an indicator of input words that are under-translated by NMT models. Furthermore, our analysis reveals that words of certain syntactic categories have higher importance, while the categories vary across language pairs, which can inspire better design principles of NMT architectures for multilingual translation.
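The abstract describes attributing a model's output to each input word via gradients. A minimal sketch of this general idea (not the authors' exact method) is below: word importance is taken as the gradient norm of an output score with respect to each input word's embedding, estimated here by finite differences on a hypothetical toy scoring function standing in for an NMT model.

```python
import numpy as np

def toy_score(embeddings, w):
    # Hypothetical stand-in for an NMT model's output score:
    # a scalar, nonlinear function of the input word embeddings.
    return np.tanh(embeddings @ w).sum()

def word_importance(embeddings, w, eps=1e-5):
    # Estimate d(score)/d(embedding) by central finite differences.
    grads = np.zeros_like(embeddings)
    for i in range(embeddings.shape[0]):
        for j in range(embeddings.shape[1]):
            e_plus = embeddings.copy()
            e_minus = embeddings.copy()
            e_plus[i, j] += eps
            e_minus[i, j] -= eps
            grads[i, j] = (toy_score(e_plus, w) - toy_score(e_minus, w)) / (2 * eps)
    # One importance value per input word: L2 norm of its embedding's gradient.
    return np.linalg.norm(grads, axis=1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))   # 4 input words, 8-dim embeddings
w = rng.normal(size=8)
imp = word_importance(emb, w)
print(imp)                      # one nonnegative score per input word
```

In practice, an autodiff framework would compute these gradients in a single backward pass over the real NMT model; the finite-difference loop here only serves to keep the sketch self-contained.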
Year: 2019
DOI: 10.18653/v1/D19-1088
Venue: EMNLP/IJCNLP (1)
DocType: Conference
Volume: D19-1
Citations: 1
PageRank: 0.36
References: 0
Authors: 6
Name             Order  Citations  PageRank
Shilin He        1      10         16.89
Zhaopeng Tu      2      518        39.95
Xing Wang        3      58         10.07
Longyue Wang     4      72         18.24
Michael R. Lyu   5      10985      529.03
Shuming Shi      6      620        58.27