Boosting the Transferability of Adversarial Attacks with Frequency-Aware Perturbation

Yajie Wang, Yi Wu, Shangbo Wu, Ximeng Liu, Wanlei Zhou, Liehuang Zhu, Chuan Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Deep neural networks (DNNs) are vulnerable to adversarial examples, and transfer attacks in black-box scenarios pose a severe real-world threat. Adversarial perturbations are typically global image disturbances crafted in the spatial domain, which leads to perceptible noise because the perturbation overfits the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) are unequally sensitive to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach to select and constrain adversarial optimization within a subset of frequency components that are more critical to model prediction. Specifically, we measure the contribution of each individual frequency component and devise a scheme that concentrates adversarial optimization on these important components, thereby creating frequency-aware perturbations. Our approach confines perturbations to model-agnostic critical frequency components, significantly reducing overfitting to the source model, and it can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that although concentrating perturbations within the selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, frequency-aware perturbations perform better, improving imperceptibility, transferability, and evasion against various defenses.
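To make the idea concrete, the sketch below illustrates one way such a frequency-aware update could look. This is not the authors' published algorithm: the function names, the DCT basis, the `keep_ratio` parameter, and the importance proxy (gradient energy per frequency component, standing in for the paper's measured contribution to model prediction) are all illustrative assumptions.

```python
# Minimal sketch of frequency-aware perturbation masking (illustrative only,
# not the paper's exact method): a hypothetical importance score selects a
# subset of DCT components, and the adversarial step is confined to them.
import numpy as np
from scipy.fft import dctn, idctn

def frequency_mask(importance: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Binary mask keeping the top `keep_ratio` fraction of frequency
    components, ranked by a per-component importance score."""
    flat = importance.ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return (importance >= threshold).astype(importance.dtype)

def frequency_aware_step(image, grad, keep_ratio=0.1, step_size=1.0 / 255):
    # Move the spatial-domain loss gradient into the frequency domain (2-D DCT).
    grad_freq = dctn(grad, norm="ortho")
    # Hypothetical importance proxy: gradient energy per frequency component.
    # The paper measures each component's contribution to model prediction;
    # |DCT(grad)| is substituted here purely for illustration.
    importance = np.abs(grad_freq)
    mask = frequency_mask(importance, keep_ratio)
    # Zero out updates outside the selected critical components, transform
    # back to the spatial domain, and take a signed (FGSM-style) step.
    masked_grad = idctn(grad_freq * mask, norm="ortho")
    return np.clip(image + step_size * np.sign(masked_grad), 0.0, 1.0)

# Usage with random stand-ins for an image and its loss gradient:
rng = np.random.default_rng(0)
x = rng.random((32, 32))
g = rng.standard_normal((32, 32))
x_adv = frequency_aware_step(x, g, keep_ratio=0.05)
```

Because the mask zeroes most frequency components, each step injects less total energy than an unconstrained update, which matches the paper's observation that a smaller overall perturbation magnitude need not reduce adversarial effectiveness.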

Original language: English
Pages (from-to): 6293-6304
Number of pages: 12
Journal: IEEE Transactions on Information Forensics and Security
Volume: 19
Publication status: Published - 2024

Keywords

  • Adversarial attack
  • adversarial example
  • deep neural networks
  • transferability
