Title
Attacking neural machine translations via hybrid attention learning.
Abstract
Deep-learning-based natural language processing (NLP) models have been proven vulnerable to adversarial attacks. However, there is currently insufficient research studying attacks on neural machine translation (NMT) models and examining the robustness of deep-learning-based NMT systems. In this paper, we aim to fill this critical research gap. When generating word-level adversarial examples in NLP attacks, existing methods face a conventional trade-off between attack performance and the amount of perturbation. Although some literature has studied this trade-off and successfully generated adversarial examples with a reasonable amount of perturbation, it remains challenging to generate highly successful translation attacks while concealing the changes to the text. To this end, we propose a novel Hybrid Attentive Attack method that locates language-specific and sequence-focused words and makes semantic-aware substitutions to attack NMT models. We evaluate the effectiveness of our attack strategy by attacking three high-performing translation models. The experimental results show that our method achieves the highest attack performance among all compared attack strategies.
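For readers unfamiliar with word-level attacks on NMT models, the sketch below illustrates the general idea the abstract describes: rank source words by the cross-attention they receive during translation, substitute a highly attended word with a semantically close candidate, and measure how far the translation degrades. This is a minimal illustration only, not the paper's Hybrid Attentive Attack; the Helsinki-NLP/opus-mt-en-de model, the hand-written synonym table, and the single-word greedy search are assumptions made for this example.

```python
# Minimal, hypothetical sketch of an attention-guided word-substitution
# attack on an NMT model. This is NOT the paper's Hybrid Attentive Attack;
# the model, the synonym table, and the heuristics are assumptions.
import torch
import sacrebleu
from transformers import MarianMTModel, MarianTokenizer

NAME = "Helsinki-NLP/opus-mt-en-de"  # assumed example translation model
tok = MarianTokenizer.from_pretrained(NAME)
model = MarianMTModel.from_pretrained(NAME).eval()

# Hypothetical semantic-aware candidates; a real attack would derive
# these from word embeddings or a masked language model.
SYNONYMS = {"movie": ["film", "flick"], "great": ["fine", "grand"]}

def translate(text: str) -> str:
    batch = tok([text], return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**batch, max_new_tokens=64, num_beams=1)
    return tok.decode(out[0], skip_special_tokens=True)

def word_scores(text: str) -> list[float]:
    """Score each source word by the cross-attention mass it receives."""
    batch = tok([text], return_tensors="pt")
    with torch.no_grad():
        gen = model.generate(**batch, max_new_tokens=64, num_beams=1,
                             output_attentions=True,
                             return_dict_in_generate=True)
    mass = torch.zeros(batch["input_ids"].shape[1])
    # cross_attentions holds, per decoding step, one tensor per layer
    # shaped (batch, heads, 1, src_len); accumulate attention mass.
    for step in gen.cross_attentions:
        for layer in step:
            mass += layer[0].sum(dim=(0, 1))
    # Map SentencePiece positions back to whitespace words; "▁" marks a
    # word-initial piece, and the trailing </s> position is skipped.
    scores = [0.0] * len(text.split())
    w = -1
    for i, piece in enumerate(tok.tokenize(text)):
        if piece.startswith("\u2581") or w < 0:
            w = min(w + 1, len(scores) - 1)
        scores[w] += float(mass[i])
    return scores

def attack(text: str) -> tuple[str, float]:
    """Perturb one highly attended word; report BLEU vs. the clean output."""
    clean = translate(text)
    words, scores = text.split(), word_scores(text)
    best = (text, 100.0)
    for i in sorted(range(len(words)), key=lambda i: -scores[i]):
        cands = SYNONYMS.get(words[i].lower(), [])
        if not cands:
            continue
        for cand in cands:  # keep the most damaging substitution
            adv = " ".join(words[:i] + [cand] + words[i + 1:])
            bleu = sacrebleu.sentence_bleu(translate(adv), [clean]).score
            if bleu < best[1]:
                best = (adv, bleu)
        break  # change a single word to keep the perturbation small
    return best

adv, bleu = attack("The movie was great")
print(adv, f"-> BLEU vs clean translation: {bleu:.1f}")
```

BLEU against the model's own clean translation is used here as a self-referenced degradation measure, a common proxy when no reference translation is available for the perturbed input.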
Year
2022
DOI
10.1007/s10994-022-06249-x
Venue
Machine Learning
Keywords
Adversarial learning, Neural machine translation, Attention models
DocType
Journal
Volume
111
Issue
11
ISSN
0885-6125
Citations
0
PageRank
0.34
References
0
Authors
5
Name          Order  Citations  PageRank
Ni Mingze     1      0          0.34
Ce Wang       2      23         9.20
Tianqing Zhu  3      159        27.73
Shui Yu       4      2365       208.84
Wei Liu       5      468        37.36