Boosting targeted black-box attacks via ensemble substitute training and linear augmentation

Xianfeng Gao, Yu An Tan, Hongwei Jiang, Quanxin Zhang, Xiaohui Kuang*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, recent studies have revealed their vulnerability to small perturbations added to source inputs. The methods used to generate these perturbations are called adversarial attacks, which fall into two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome the black-box attacker's lack of access to the internals of the target DNN, researchers have proposed a series of strategies. Previous work includes a method that trains a local substitute model for the target black-box model via Jacobian-based augmentation and then uses the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation so that the substitute models better fit the decision boundary of the target model. Unlike previous work, which performed only non-targeted attacks, we are the first to generate targeted adversarial examples via substitute-model training. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to substitute training. Experiments on MNIST and GTSRB, two common image classification datasets, demonstrate the effectiveness and efficiency of our boosted targeted black-box attack; we ultimately attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%, respectively.
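For context, the baseline Jacobian-based dataset augmentation (Papernot et al.) that this work builds on can be sketched as follows. This is a minimal NumPy illustration, not the paper's improved linear augmentation: `model_grad` is a hypothetical callable standing in for the substitute model's gradient of the class-`y` logit with respect to the input, and `lam` is the augmentation step size.

```python
import numpy as np

def jacobian_augment(X, labels, model_grad, lam=0.1):
    """One round of Jacobian-based dataset augmentation.

    Each point x is perturbed along the sign of the substitute's
    Jacobian row for its label y, i.e. x' = x + lam * sign(dF_y/dx),
    and appended to the dataset. In the full attack, the new points
    would then be labelled by querying the black-box target model.
    """
    X_new = np.array([x + lam * np.sign(model_grad(x, y))
                      for x, y in zip(X, labels)])
    return np.vstack([X, X_new])

# Toy substitute: a linear classifier with logits W @ x, so the
# gradient of the class-y logit w.r.t. x is simply the row W[y].
rng = np.random.default_rng(0)
W = rng.standard_normal((10, 784))
X = rng.random((5, 784))
labels = np.arange(5)
X_aug = jacobian_augment(X, labels, lambda x, y: W[y])
# The augmented set contains the originals plus one new point each.
```

Each augmentation round thus doubles the synthetic training set, pushing new points toward the substitute's current decision boundary so the black-box oracle's labels there are maximally informative.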

Original language: English
Article number: 2286
Journal: Applied Sciences (Switzerland)
Volume: 9
Issue: 11
DOI
Publication status: Published - 1 Jun 2019
