Title
Effective Graph Context Representation for Document-level Machine Translation
Abstract
Document-level neural machine translation (DocNMT) typically encodes several local sentences or the entire document indiscriminately, and therefore does not consider the relevance of document-level contextual information: some context (e.g., content words, logical order, and co-occurrence relations) is more informative than other context (e.g., function words and auxiliary words). To address this issue, we first use word-frequency information to recognize content words in the input document, and then use heuristic relations to summarize the content words and sentences as a graph structure, without relying on external syntactic knowledge. We further apply graph attention networks to this graph to learn its feature representation, which allows DocNMT to capture the document-level context more effectively. Experimental results on several widely used document-level benchmarks demonstrate the effectiveness of the proposed approach.
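The abstract outlines a pipeline: identify content words by frequency, connect words and sentences with heuristic relations (logical order, co-occurrence), and run graph attention over the resulting graph. The sketch below illustrates that kind of pipeline only; it is not the authors' implementation. The frequency threshold (`max_rel_freq`), the specific edge rules, the random toy embeddings, and the single-head NumPy attention layer are all assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's code): build a heuristic word-sentence graph
# from a document and run one graph-attention layer over it.
from collections import Counter
import numpy as np

def build_graph(sentences, max_rel_freq=0.2):
    """Return node list and symmetric adjacency matrix.

    Nodes = sentences + content words. Content words are words whose relative
    frequency stays below `max_rel_freq` (very frequent tokens are treated as
    function/auxiliary words and dropped) -- an assumed, illustrative rule.
    Edges: sentence i -- sentence i+1 (logical order) and word w -- sentence s
    whenever w occurs in s (co-occurrence relation).
    """
    tokens = [s.lower().split() for s in sentences]
    counts = Counter(w for sent in tokens for w in sent)
    total = sum(counts.values())
    content = sorted({w for w in counts if counts[w] / total < max_rel_freq})

    nodes = [("sent", i) for i in range(len(sentences))] + [("word", w) for w in content]
    index = {n: i for i, n in enumerate(nodes)}
    adj = np.eye(len(nodes))                      # self-loops
    for i in range(len(sentences) - 1):           # logical-order edges
        a, b = index[("sent", i)], index[("sent", i + 1)]
        adj[a, b] = adj[b, a] = 1.0
    for i, sent in enumerate(tokens):             # word-sentence edges
        for w in set(sent):
            if ("word", w) in index:
                a, b = index[("sent", i)], index[("word", w)]
                adj[a, b] = adj[b, a] = 1.0
    return nodes, adj

def gat_layer(h, adj, w, a, leaky=0.2):
    """One single-head graph-attention layer (Velickovic et al. style)."""
    z = h @ w                                     # (N, d') projected features
    n = z.shape[0]
    pair = np.concatenate(                        # (N, N, 2d') all node pairs
        [np.repeat(z[:, None, :], n, 1), np.repeat(z[None, :, :], n, 0)], -1)
    e = pair @ a                                  # (N, N) raw attention scores
    e = np.where(e > 0, e, leaky * e)             # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                # mask non-neighbours
    alpha = np.exp(e - e.max(1, keepdims=True))
    alpha = alpha / alpha.sum(1, keepdims=True)   # softmax over neighbours
    return np.maximum(alpha @ z, 0.0)             # aggregate + ReLU

if __name__ == "__main__":
    doc = ["The committee approved the budget .",
           "The budget funds new translation research .",
           "Research results will be published next year ."]
    nodes, adj = build_graph(doc)
    rng = np.random.default_rng(0)
    h = rng.normal(size=(len(nodes), 16))         # toy node embeddings
    w = rng.normal(size=(16, 16)) * 0.1
    a = rng.normal(size=(32,)) * 0.1
    ctx = gat_layer(h, adj, w, a)                 # document-level context features
    print(ctx.shape)
```

The resulting node features would serve as the document-level context representation fed to the translation model; in the paper this role is played by graph attention networks over the heuristically built graph, whereas the layer above is just a compact stand-in.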
Year
2022
DOI
10.24963/ijcai.2022/566
Venue
International Joint Conference on Artificial Intelligence (IJCAI)
Keywords
Natural Language Processing: Machine Translation and Multilinguality, Natural Language Processing: Language Generation
DocType
Conference
Citations
0
PageRank
0.34
References
0
Authors
6
Name              Order  Citations  PageRank
Kehai Chen        1      43         16.34
Yang Muyun        2      112        29.50
Masao Utiyama     3      714        86.69
Eiichiro Sumita   4      1466       190.87
Rui Wang          5      0          0.34
Min Zhang         6      0          0.34