Enhancing the transferability of adversarial attacks via Scale Enriching

Yuhang Zhao, Jun Zheng, Xianfeng Gao, Lu Liu, Yaoyuan Zhang, Quanxin Zhang*
(*Corresponding author for this work)

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning models are vulnerable to adversarial attacks. Transfer-based adversarial examples are crafted against surrogate models and then transferred to victim models. However, under black-box settings, most adversarial examples transfer poorly to models with different input sizes. In this work, we propose the Scale Enriching Method (SEM), which enhances the transferability of adversarial examples through an input scale-enriching framework. By scaling the surrogate model's input over a specific range, our method enriches the attention areas the surrogate model perceives and enlarges the tolerance for distinctions among different models, significantly improving transferability. Notably, SEM avoids introducing extraneous noise during perturbation generation, thereby preserving the textural features that the original images exhibit at different scales. Experiments on ImageNet show that our method successfully narrows the transferability gap between models with different input sizes. Furthermore, we demonstrate that our method can be integrated with existing attacks and bypasses a variety of defense methods with over a 90% success rate.
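The paper's exact algorithm is not reproduced on this page; the sketch below only illustrates the general idea the abstract describes, aggregating gradients computed at several input scales before taking the perturbation step. The toy "classifier", the nearest-neighbour resize, and every function name and parameter here are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def resize_nn(img, size):
    """Nearest-neighbour resize of a 2-D array (toy stand-in for bilinear interpolation)."""
    h, w = img.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[np.ix_(rows, cols)]

def toy_grad(img, pattern):
    """Analytic gradient of a toy score, mean(img * pattern_resized), w.r.t. img."""
    p = resize_nn(pattern, img.shape[0])
    return p / img.size

def scale_enriched_step(x, pattern, scales=(24, 32, 48), eps=0.03):
    """One FGSM-style step whose gradient is aggregated over several input scales."""
    g = np.zeros_like(x)
    for s in scales:
        xs = resize_nn(x, s)                  # feed the input at an enriched scale
        gs = toy_grad(xs, pattern)            # gradient at that scale
        g += resize_nn(gs, x.shape[0])        # map the gradient back to the original size
    return x + eps * np.sign(g)               # sign step, bounded by eps in L-infinity

rng = np.random.default_rng(0)
x = rng.random((32, 32))
pattern = rng.standard_normal((32, 32))
x_adv = scale_enriched_step(x, pattern)
```

In this sketch the per-scale gradients are summed before the sign is taken, so the perturbation reflects features visible at every scale in the range rather than at the surrogate's native resolution alone.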

Original language: English
Article number: 107549
Journal: Neural Networks
Volume: 189
DOIs
Publication status: Published - Sept 2025
Externally published: Yes

Keywords

  • Adversarial attack
  • Adversarial example
  • Transfer attack
