Boosting cross-task adversarial attack with random blur

Yaoyuan Zhang, Yu-an Tan, Mingfeng Lu, Tian Chen, Yuanzhang Li, Quanxin Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Deep neural networks are highly vulnerable to adversarial examples, and these adversarial examples remain malicious when transferred to other neural networks. Many works exploit this transferability of adversarial examples to execute black-box attacks. However, most existing adversarial attack methods rarely consider cross-task black-box attacks, which are closer to real-world scenarios. In this paper, we propose a class of random blur-based iterative methods (RBMs) to enhance the success rates of cross-task black-box attacks. By integrating random erasing and Gaussian blur into iterative gradient-based attacks, the proposed RBM augments the diversity of the adversarial perturbations and alleviates the marginal effect caused by iterative gradient-based methods, generating adversarial examples with stronger transferability. Experimental results on the ImageNet and PASCAL VOC datasets show that the proposed RBM generates more transferable adversarial examples on image classification models, thereby successfully attacking cross-task black-box object detection models.
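The abstract describes the method only at a high level. Below is a minimal, illustrative PyTorch sketch of how random erasing and Gaussian blur might be folded into an iterative gradient-based attack (I-FGSM-style), as the abstract outlines. The function names (`random_blur`, `rbm_attack`), the transform parameters, and the step schedule are all hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(ksize=5, sigma=1.0, channels=3):
    # Build a 2D Gaussian kernel, expanded for depthwise convolution.
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    return (k2d / k2d.sum()).expand(channels, 1, ksize, ksize).contiguous()

def random_blur(x, ksize=5, sigma=1.0, erase_frac=0.2):
    # Hypothetical input transform: erase a random rectangle, then
    # apply Gaussian blur to the whole image.
    b, c, h, w = x.shape
    eh, ew = int(h * erase_frac), int(w * erase_frac)
    top = int(torch.randint(0, h - eh + 1, (1,)))
    left = int(torch.randint(0, w - ew + 1, (1,)))
    x_t = x.clone()
    x_t[:, :, top:top + eh, left:left + ew] = 0.0
    kernel = gaussian_kernel(ksize, sigma, c).to(x.device)
    return F.conv2d(x_t, kernel, padding=ksize // 2, groups=c)

def rbm_attack(model, x, y, eps=16 / 255, steps=10):
    # I-FGSM-style loop: the gradient is computed on a randomly
    # transformed copy of the input, so each step sees a different
    # erased/blurred view, diversifying the perturbation.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(random_blur(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

Redrawing the random transform at every iteration is what diversifies the gradient directions across steps; averaging gradients over several random draws per step would be a natural variance-reduction variant of this sketch.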

Original language: English
Pages (from-to): 8139-8154
Number of pages: 16
Journal: International Journal of Intelligent Systems
Volume: 37
Issue number: 10
DOI
Publication status: Published - Oct 2022

