Research on Transferable Characteristics of Adversarial Examples Generated Based on Gradient Information

Yang Li, Pengfei Zhang, Qiaoyi Li, Zhengjie Wang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Deep neural networks are easily disturbed by adversarial examples, which raises many security problems for applications of artificial intelligence technology. Adversarial examples and adversarial attacks can be useful tools for evaluating the robustness of deep learning models before they are deployed. However, most adversarial examples generated on a single network model have weak transferability: they attack the source model effectively but rarely succeed against other network models. In response to this problem, this paper studies adversarial examples generated by the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), and their transferability, on the ImageNet dataset. Meanwhile, the paper proposes a pixel-level image fusion method to enhance transferability. Adversarial examples generated by this method can be applied more effectively to attack a variety of neural network models, and such highly transferable adversarial examples can serve as a benchmark for measuring the robustness of DNN models and defense methods.
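The abstract names the two gradient-based attacks studied but gives no implementation detail. As background, the two methods can be sketched as follows; this is a minimal illustration on a toy logistic-regression "model" with an analytic input gradient, not the paper's setup (the paper attacks deep ImageNet classifiers, and all names here — `w`, `b`, `epsilon`, `alpha` — are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, y, w, b):
    # Gradient of the cross-entropy loss w.r.t. the input x, in closed
    # form for logistic regression: dL/dx = (sigmoid(w.x + b) - y) * w.
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, w, b, epsilon):
    # FGSM: a single step of size epsilon along the sign of the
    # input gradient.
    return x + epsilon * np.sign(loss_grad_wrt_x(x, y, w, b))

def bim(x, y, w, b, epsilon, alpha, steps):
    # BIM: iterated FGSM with a smaller per-step size alpha, clipped
    # after every step to the epsilon-ball around the clean input.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_wrt_x(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
    return x_adv

# Toy demo: both attacks push the model's score for the true label
# (y = 1) downward relative to the clean input.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.3, 0.4])
y = 1.0

x_fgsm = fgsm(x, y, w, b, epsilon=0.1)
x_bim = bim(x, y, w, b, epsilon=0.1, alpha=0.02, steps=10)
print(sigmoid(w @ x + b), sigmoid(w @ x_fgsm + b), sigmoid(w @ x_bim + b))
```

Both attacks perturb only the input, never the model; transferability asks whether a perturbation crafted against one model's gradients also fools models whose gradients were never consulted.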

Original language: English
Title of host publication: Proceedings of 2022 Chinese Intelligent Systems Conference - Volume I
Editors: Yingmin Jia, Weicun Zhang, Yongling Fu, Shoujun Zhao
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 405-415
Number of pages: 11
ISBN (Print): 9789811962028
DOIs
Publication status: Published - 2022
Event: 18th Chinese Intelligent Systems Conference, CISC 2022 - Beijing, China
Duration: 15 Oct 2022 – 16 Oct 2022

Publication series

Name: Lecture Notes in Electrical Engineering
Volume: 950 LNEE
ISSN (Print): 1876-1100
ISSN (Electronic): 1876-1119

Conference

Conference: 18th Chinese Intelligent Systems Conference, CISC 2022
Country/Territory: China
City: Beijing
Period: 15/10/22 – 16/10/22

Keywords

  • Adversarial attacks
  • Adversarial examples
  • Deep neural network models
  • Pixel-level image fusion
  • Transferability
