Abstract
In recent years, Deep Neural Networks (DNNs) have achieved unprecedented performance in many areas. However, recent studies have revealed their vulnerability to small perturbations added to source inputs. The methods used to generate these perturbations are called adversarial attacks, and they fall into two types, black-box and white-box attacks, according to the adversary's access to the target model. To overcome the problem that black-box attackers cannot reach the internals of the target DNN, researchers have put forward a series of strategies. Previous work includes training a local substitute model for the target black-box model via Jacobian-based dataset augmentation and then using the substitute model to craft adversarial examples with white-box methods. In this work, we improve the dataset augmentation so that the substitute models better fit the decision boundary of the target model. Unlike previous work, which performed only non-targeted attacks, we are the first to generate targeted adversarial examples via substitute training. Moreover, to boost the targeted attacks, we apply the idea of ensemble attacks to substitute training. Experiments on MNIST and GTSRB, two common image-classification datasets, demonstrate the effectiveness and efficiency of our method for boosting targeted black-box attacks; we attack the MNIST and GTSRB classifiers with success rates of 97.7% and 92.8%, respectively.
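For readers unfamiliar with the baseline this abstract builds on, below is a minimal sketch of Jacobian-based dataset augmentation for substitute training, the step the paper improves. It is not the authors' implementation; the names `substitute`, `query_oracle`, and the step size `lmbda` are illustrative placeholders, and PyTorch is assumed as the framework.

```python
# Sketch of one Jacobian-based augmentation round (after Papernot et al.):
# new synthetic inputs are created by stepping each image along the sign of
# the substitute model's gradient for the oracle-assigned class, so the
# augmented set probes the target's decision boundary.
import torch

def jacobian_augment(substitute, images, oracle_labels, lmbda=0.1):
    """Return perturbed copies of `images` for the next substitute-training round.

    substitute     -- local model being trained to mimic the black-box target
    images         -- batch of inputs, shape (N, C, H, W), values in [0, 1]
    oracle_labels  -- labels the black-box target assigned to `images`
    lmbda          -- augmentation step size (hypothetical default)
    """
    images = images.clone().detach().requires_grad_(True)
    logits = substitute(images)
    # Sum the logit of each image's oracle-assigned class; since each logit
    # depends only on its own image, backward() yields per-image gradients.
    selected = logits.gather(1, oracle_labels.view(-1, 1)).sum()
    selected.backward()
    # Step along the Jacobian sign, then keep inputs in a valid range.
    new_images = (images + lmbda * images.grad.sign()).detach()
    return torch.clamp(new_images, 0.0, 1.0)

# Typical loop (query_oracle is a hypothetical black-box labelling call):
# new_x = jacobian_augment(substitute, x, y_oracle)
# new_y = query_oracle(new_x)   # label synthetic points with the target model
# training_set += (new_x, new_y)
```

Once the substitute fits the target's boundary well enough, targeted adversarial examples crafted on it (or on an ensemble of such substitutes, as the abstract describes) can be transferred to the black-box target.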
Original language | English
---|---
Article number | 2286
Journal | Applied Sciences (Switzerland)
Volume | 9
Issue number | 11
DOIs |
Publication status | Published - 1 Jun 2019
Keywords
- Adversarial attack
- Black-box attack
- Dataset augmentation
- Deep learning
- Substitute training