Boosting cross-task adversarial attack with random blur

Yaoyuan Zhang, Yu-an Tan, Mingfeng Lu, Tian Chen, Yuanzhang Li, Quanxin Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Deep neural networks are highly vulnerable to adversarial examples, and these adversarial examples remain malicious when transferred to other neural networks. Many works exploit this transferability of adversarial examples to execute black-box attacks. However, most existing adversarial attack methods rarely consider cross-task black-box attacks, which are closer to real-world scenarios. In this paper, we propose a class of random blur-based iterative methods (RBMs) to enhance the success rates of cross-task black-box attacks. By integrating random erasing and Gaussian blur into iterative gradient-based attacks, the proposed RBM augments the diversity of the adversarial perturbation and alleviates the marginal effect caused by iterative gradient-based methods, generating adversarial examples with stronger transferability. Experimental results on the ImageNet and PASCAL VOC data sets show that the proposed RBM generates more transferable adversarial examples on image classification models, thereby successfully attacking cross-task black-box object detection models.
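As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below folds random erasing and Gaussian blur into an I-FGSM-style iteration: at each step the current adversarial image is randomly transformed before the gradient is computed, so the perturbation is optimized over diverse views of the input. The model, perturbation budget, kernel size, and probabilities are illustrative assumptions.

```python
# Minimal sketch of a random blur-based iterative attack (assumed details,
# not the paper's RBM code). Requires torch and torchvision.
import torch
import torch.nn.functional as F
from torchvision import transforms


def rbm_attack(model, x, y, eps=16 / 255, steps=10, blur_prob=0.5):
    """Craft adversarial examples for a batch (x, y) with random erasing + blur.

    x: input images in [0, 1], shape (B, C, H, W); y: ground-truth labels.
    """
    alpha = eps / steps                                      # per-step budget
    erase = transforms.RandomErasing(p=0.5, scale=(0.02, 0.2))
    blur = transforms.GaussianBlur(kernel_size=5, sigma=(0.5, 1.5))
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Randomly erase a region and (with some probability) blur the input
        # before the forward pass, so the gradient reflects diverse views.
        x_t = erase(x_adv)
        if torch.rand(1).item() < blur_prob:
            x_t = blur(x_t)
        loss = F.cross_entropy(model(x_t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # I-FGSM update, then project back into the L-infinity eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

In this reading, the random transforms play the same role as input-diversity methods: they keep successive gradient steps from overfitting to one surrogate model, which is what the abstract credits for the improved cross-task transferability.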

Original language: English
Pages (from-to): 8139-8154
Number of pages: 16
Journal: International Journal of Intelligent Systems
Volume: 37
Issue number: 10
DOIs
Publication status: Published - Oct 2022

Keywords

  • adversarial examples
  • deep neural networks
  • image classification
  • object detection
  • transferability
