Title
An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing
Abstract
In this work, we explore multiple neural architectures adapted to the task of automatic post-editing of machine translation output. We focus on neural end-to-end models that combine both inputs $mt$ (raw MT output) and $src$ (source-language input) in a single neural architecture, modeling $\{mt, src\} \rightarrow pe$ directly. We also investigate hard-attention models, which seem well-suited for monolingual tasks, as well as combinations of both ideas. We report results on the data sets provided during the WMT-2016 shared task on automatic post-editing and demonstrate that dual-attention models that incorporate all data available in the APE scenario in a single model improve over the best shared-task system and over all other results published since the shared task. Dual-attention models combined with hard attention remain competitive despite applying fewer changes to the input.
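The dual-attention idea in the abstract, a single decoder attending separately over an encoder for $src$ and an encoder for $mt$ and combining both context vectors before predicting the next $pe$ token, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact architecture: the names (attend, dual_attention_step), the dot-product attention scoring, and the concatenation-plus-projection combination are all hypothetical choices made for brevity.

import numpy as np

def softmax(x):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, states):
    # Dot-product attention (illustrative choice): score each encoder
    # state against the decoder query, then return the weighted sum.
    scores = states @ query          # (T,)
    weights = softmax(scores)        # (T,)
    return weights @ states          # (d,)

def dual_attention_step(dec_state, src_states, mt_states, W_out):
    # Attend separately over the source sentence and the raw MT output,
    # then combine both context vectors with the decoder state. The
    # concatenate-and-project combination is an assumption for this sketch.
    ctx_src = attend(dec_state, src_states)
    ctx_mt = attend(dec_state, mt_states)
    combined = np.concatenate([dec_state, ctx_src, ctx_mt])   # (3d,)
    return np.tanh(W_out @ combined)                          # (d,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    src_states = rng.normal(size=(5, d))   # hidden states of the src encoder
    mt_states = rng.normal(size=(6, d))    # hidden states of the mt encoder
    dec_state = rng.normal(size=d)         # current decoder state
    W_out = rng.normal(size=(d, 3 * d))    # output projection
    print(dual_attention_step(dec_state, src_states, mt_states, W_out))

In a full model these features would feed a softmax over the target vocabulary; the point of the sketch is only that both inputs contribute their own attention context within one architecture.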
Year
2017
Venue
International Joint Conference on Natural Language Processing (IJCNLP)
DocType
Conference
Volume
abs/1706.04138
Citations
3
PageRank
0.41
References
9
Authors
2
Name | Order | Citations | PageRank
Marcin Junczys-Dowmunt | 1 | 312 | 24.24
Roman Grundkiewicz | 2 | 109 | 11.75