Research on Transferable Characteristics of Adversarial Examples Generated Based on Gradient Information

Yang Li, Pengfei Zhang, Qiaoyi Li, Zhengjie Wang*

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep neural networks are easily disturbed by adversarial examples, which raises many security concerns for the application of artificial intelligence technology. Adversarial examples and adversarial attacks can serve as useful tools for evaluating the robustness of deep learning models before they are deployed. However, most adversarial examples generated on a single network model have weak transferability: they attack the source model effectively but rarely succeed against other network models. To address this problem, this paper studies adversarial examples generated by the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), and their transferability, on the ImageNet dataset. The paper also proposes a pixel-level image fusion method to enhance transferability. Adversarial examples generated by this method can be applied more effectively to attack a variety of neural network models. Such highly transferable adversarial examples can serve as a benchmark for measuring the robustness of DNN models and defense methods.
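For context, the two gradient-based attacks named in the abstract follow standard formulations: FGSM takes a single step x_adv = x + eps * sign(grad_x L(x, y)), while BIM repeats smaller steps and clips the result back into the eps-ball around x. Below is a minimal PyTorch sketch of both; it is an illustrative reconstruction of the standard methods, not the authors' code, and model, x, y, eps, alpha, and steps are assumed placeholders (pixel values assumed to lie in [0, 1]).

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step FGSM: perturb along the sign of the input gradient of the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def bim(model, x, y, eps, alpha, steps):
    # BIM: repeated small FGSM steps, projected back into the L-inf eps-ball around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # stay within the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()    # keep a valid pixel range
    return x_adv

A typical call would be x_adv = bim(model, images, labels, eps=8/255, alpha=2/255, steps=10); transferability is then assessed by evaluating x_adv against models other than the one used to compute the gradients.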

Original language: English
Host publication title: Proceedings of 2022 Chinese Intelligent Systems Conference - Volume I
Editors: Yingmin Jia, Weicun Zhang, Yongling Fu, Shoujun Zhao
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 405-415
Number of pages: 11
ISBN (Print): 9789811962028
DOI
Publication status: Published - 2022
Event: 18th Chinese Intelligent Systems Conference, CISC 2022 - Beijing, China
Duration: 15 Oct 2022 - 16 Oct 2022

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 950 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: 18th Chinese Intelligent Systems Conference, CISC 2022
Country/Territory: China
City: Beijing
Period: 15/10/22 - 16/10/22
