A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples
Zhao Meng, Roger Wattenhofer
Student Research Workshop (SRW) Paper
Session 4B: Jul 6 (18:00-19:00 GMT)
Session 14B: Jul 8 (18:00-19:00 GMT)
Abstract:
Generating adversarial examples for natural language is hard, as natural language consists of discrete symbols and examples are often of variable length. In this paper, we propose a geometry-inspired attack for generating natural language adversarial examples. Our attack generates adversarial examples by iteratively approximating the decision boundary of deep neural networks. Experiments on two datasets with two different models show that our attack fools the models with high success rates while replacing only a few words. Human evaluation shows that adversarial examples generated by our attack are hard for humans to recognize. Further experiments show that adversarial training can improve model robustness against our attack.
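
The abstract does not specify implementation details, but the following minimal Python sketch illustrates one way a boundary-approximating word-substitution attack could be organized. The interfaces model (token list to a logit tensor) and substitutes (word to candidate replacements, e.g., nearest neighbors in embedding space) are assumptions, as is the greedy margin search itself; the paper's actual method may instead use gradient information to approximate the decision boundary.

import torch

def class_margin(logits, label):
    # True-class logit minus the best competing logit.
    # A negative margin means the input has crossed the decision boundary.
    others = logits.clone()
    others[label] = float("-inf")
    return (logits[label] - others.max()).item()

def geometry_attack(model, tokens, label, substitutes, max_replacements=5):
    # Greedily replace one word per iteration, keeping the substitution
    # that moves the example closest to (and eventually across) the boundary.
    tokens = list(tokens)
    for _ in range(max_replacements):
        best_margin = class_margin(model(tokens), label)
        best_tokens = None
        for i, word in enumerate(tokens):
            for sub in substitutes(word):  # hypothetical candidate generator
                trial = tokens[:i] + [sub] + tokens[i + 1:]
                m = class_margin(model(trial), label)
                if m < best_margin:
                    best_margin, best_tokens = m, trial
        if best_tokens is None:   # no substitution reduces the margin
            return None
        tokens = best_tokens
        if best_margin < 0:       # prediction flipped: adversarial example found
            return tokens
    return None

In practice the candidate substitutions would also be filtered (e.g., by part of speech or semantic similarity) to keep the adversarial examples natural, which is relevant to the human-recognizability result reported in the abstract; those filters are omitted here for brevity.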