A Reinforced Generation of Adversarial Examples for Neural Machine Translation
Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, Jiajun Chen
Machine Translation Long Paper
Session 6B: Jul 7
(06:00-07:00 GMT)
Session 7B: Jul 7
(09:00-10:00 GMT)
Abstract:
Neural machine translation systems, despite their significant efficacy, tend to fail on flawed or noisy inputs, which may seriously harm the credibility of these systems; understanding how and when neural-based systems fail in such cases is critical for industrial maintenance. Instead of collecting and analyzing bad cases using limited handcrafted error features, here we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm can expose pitfalls for a given performance metric, e.g., BLEU, and can target any given neural machine translation architecture. We conduct adversarial attack experiments on two mainstream neural machine translation architectures, RNN-Search and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis of the attack's preference pattern, demonstrating its capability for pitfall exposure.
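The core signal the abstract describes, rewarding a perturbation by how much it degrades the victim system's translation quality, can be sketched as follows. This is a toy illustration, not the authors' implementation: `unigram_bleu` is a unigram-precision stand-in for real BLEU, `toy_translate` is a stub lookup "translator", and all names are hypothetical.

```python
from collections import Counter

def unigram_bleu(hypothesis: str, reference: str) -> float:
    # Toy unigram-precision stand-in for BLEU (the paper targets real BLEU).
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    overlap = sum((Counter(hyp) & Counter(ref)).values())
    return overlap / len(hyp)

def toy_translate(src: str) -> str:
    # Stub "NMT system": word-for-word lookup; unknown words map to <unk>.
    lexicon = {"the": "le", "cat": "chat", "sat": "assis"}
    return " ".join(lexicon.get(w, "<unk>") for w in src.split())

def attack_reward(src: str, perturbed_src: str, reference: str) -> float:
    # Reward for the attacking agent: BLEU degradation caused by the
    # (ideally meaning-preserving) perturbation of the source sentence.
    clean_score = unigram_bleu(toy_translate(src), reference)
    attacked_score = unigram_bleu(toy_translate(perturbed_src), reference)
    return clean_score - attacked_score

# Example: substituting "cat" with the near-synonym "kitty" (out of the
# stub lexicon) drops translation quality, yielding a positive reward.
reward = attack_reward("the cat sat", "the kitty sat", "le chat assis")
```

In the paper's reinforcement-learning framing, a policy would propose such perturbations and be trained on this kind of metric-degradation reward; the sketch above only shows the reward computation, not the learning loop.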
Similar Papers
A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples
Zhao Meng, Roger Wattenhofer

Word-level Textual Adversarial Attacking as Combinatorial Optimization
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, Maosong Sun

On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation
Chaojun Wang, Rico Sennrich

Evaluating and Enhancing the Robustness of Neural Network-based Dependency Parsing Models with Adversarial Examples
Xiaoqing Zheng, Jiehang Zeng, Yi Zhou, Cho-Jui Hsieh, Minhao Cheng, Xuanjing Huang
