Abstract
Deep neural networks are highly vulnerable to adversarial examples, and these examples often remain adversarial when transferred to other neural networks. Many works exploit this transferability to mount black-box attacks. However, most existing adversarial attack methods rarely consider cross-task black-box attacks, which are closer to real-world scenarios. In this paper, we propose a class of random blur-based iterative methods (RBMs) to improve the success rates of cross-task black-box attacks. By integrating random erasing and Gaussian blur into iterative gradient-based attacks, the proposed RBM increases the diversity of the adversarial perturbation and alleviates the marginal effect caused by iterative gradient-based methods, generating adversarial examples with stronger transferability. Experimental results on the ImageNet and PASCAL VOC datasets show that the proposed RBM generates more transferable adversarial examples against image classification models, which in turn successfully attack cross-task black-box object detection models.
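The abstract describes folding random erasing and Gaussian blur into an iterative gradient-based attack. The sketch below illustrates that general idea, not the paper's exact algorithm: it assumes MI-FGSM as the base iterative attack, uses torchvision's `RandomErasing` and `GaussianBlur` as stand-ins for the two transformations, and all hyper-parameters (`eps`, `steps`, kernel size, erase scale, application probabilities) are illustrative placeholders.

```python
# Minimal sketch of a random blur-based iterative attack (assumptions noted above).
import torch
import torch.nn as nn
from torchvision import transforms


def rbm_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0,
               blur_prob=0.5, erase_prob=0.5):
    """Craft adversarial examples by computing gradients on randomly
    blurred / partially erased copies of the current adversarial image."""
    alpha = eps / steps
    blur = transforms.GaussianBlur(kernel_size=5, sigma=(0.5, 1.5))
    erase = transforms.RandomErasing(p=1.0, scale=(0.02, 0.1))
    loss_fn = nn.CrossEntropyLoss()

    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum (MI-FGSM style, an assumption)

    for _ in range(steps):
        x_adv.requires_grad_(True)

        # Randomly transform the input before the gradient step to
        # diversify the perturbation (the core idea sketched here).
        x_in = x_adv
        if torch.rand(1).item() < blur_prob:
            x_in = blur(x_in)
        if torch.rand(1).item() < erase_prob:
            x_in = torch.stack([erase(img) for img in x_in])

        loss = loss_fn(model(x_in), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Normalize and accumulate the gradient, then take a signed step.
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()

        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

Under these assumptions, the transferability gain comes from averaging the attack direction over randomly distorted views of the input, so the perturbation overfits less to the surrogate classifier.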
Original language | English |
---|---|
Pages (from-to) | 8139-8154 |
Number of pages | 16 |
Journal | International Journal of Intelligent Systems |
Volume | 37 |
Issue number | 10 |
DOIs | |
Publication status | Published - Oct 2022 |
Keywords
- adversarial examples
- deep neural networks
- image classification
- object detection
- transferability