TY - GEN
T1 - Research on Transferable Characteristics of Adversarial Examples Generated Based on Gradient Information
AU - Li, Yang
AU - Zhang, Pengfei
AU - Li, Qiaoyi
AU - Wang, Zhengjie
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
PY - 2022
Y1 - 2022
N2 - Deep neural networks are easily disturbed by adversarial examples, which poses many security problems for the application of artificial intelligence technology. Adversarial examples and adversarial attacks can be useful tools for evaluating the robustness of deep learning models before they are deployed. However, most adversarial examples generated on a single network model have weak transferability: they attack that model effectively but rarely succeed against other network models. In response to this problem, this paper studies the adversarial examples generated by the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM) and their transferability on the ImageNet dataset. In addition, the paper proposes a pixel-level image fusion method to enhance transferability. The adversarial examples generated by this method can be applied more effectively to attack a variety of neural network models. These highly transferable adversarial examples can serve as a benchmark for measuring the robustness of DNN models and defense methods.
KW - Adversarial attacks
KW - Adversarial examples
KW - Deep neural network models
KW - Pixel-level image fusion
KW - Transferability
UR - http://www.scopus.com/inward/record.url?scp=85140433463&partnerID=8YFLogxK
DO - 10.1007/978-981-19-6203-5_39
M3 - Conference contribution
AN - SCOPUS:85140433463
SN - 9789811962028
T3 - Lecture Notes in Electrical Engineering
SP - 405
EP - 415
BT - Proceedings of 2022 Chinese Intelligent Systems Conference - Volume I
A2 - Jia, Yingmin
A2 - Zhang, Weicun
A2 - Fu, Yongling
A2 - Zhao, Shoujun
PB - Springer Science and Business Media Deutschland GmbH
T2 - 18th Chinese Intelligent Systems Conference, CISC 2022
Y2 - 15 October 2022 through 16 October 2022
ER -