Abstract
Deep learning models are vulnerable to adversarial attacks. Transfer-based adversarial examples are crafted against surrogate models and then transferred to victim models. However, in black-box settings, most adversarial examples transfer poorly to models with different input sizes. In this work, we propose the Scale Enriching Method (SEM), which enhances the transferability of adversarial examples through an input scale-enriching framework. By scaling the surrogate model's input over a specific range, our method enriches the attention areas the surrogate model perceives and enlarges the tolerance for distinctions among different models, significantly improving transferability. Notably, SEM avoids introducing extraneous noise during perturbation generation, thereby preserving the textural features that correspond to different scales within the original images. Experiments on ImageNet show that our method successfully narrows the transferability gap between models with different input sizes. Furthermore, we demonstrate that SEM can be integrated with existing methods and bypasses a variety of defense methods with a success rate above 90%.
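The abstract does not spell out SEM's update rule. As a rough illustration only, the following is a minimal sketch assuming a scale-enriching attack averages loss gradients over several rescaled copies of the input and then takes a single FGSM-style sign step; the scale set, the aggregation by averaging, and the sign-step update are assumptions here, not the paper's exact procedure.

```python
import numpy as np

def scale_enriched_step(x, grad_fn, scales, eps):
    """One sketch of a scale-enriching attack step (assumed form, not SEM's
    exact algorithm): average the loss gradient over rescaled copies of the
    input, then apply a sign step of size eps, clipping to the valid range."""
    g = np.zeros_like(x)
    for s in scales:
        g += grad_fn(x * s)  # gradient of the loss w.r.t. a scaled copy
    g /= len(scales)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

# Toy stand-in "model": loss = w . x, whose gradient is w at every scale.
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.2, 0.8, 0.5])
x_adv = scale_enriched_step(x, lambda z: w, scales=[0.5, 0.75, 1.0], eps=0.1)
```

With a real surrogate network, `grad_fn` would backpropagate the classification loss through the model at each scaled input; averaging over scales is what (under these assumptions) enriches the attended regions beyond a single input size.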
| Original language | English |
|---|---|
| Article number | 107549 |
| Journal | Neural Networks |
| Volume | 189 |
| DOIs | |
| Publication status | Published - Sept 2025 |
| Externally published | Yes |
Keywords
- Adversarial attack
- Adversarial example
- Transfer attack
Fingerprint
Dive into the research topics of 'Enhancing the transferability of adversarial attacks via Scale Enriching'.