TY - JOUR
T1 - Boosting the Transferability of Adversarial Attacks with Frequency-Aware Perturbation
AU - Wang, Yajie
AU - Wu, Yi
AU - Wu, Shangbo
AU - Liu, Ximeng
AU - Zhou, Wanlei
AU - Zhu, Liehuang
AU - Zhang, Chuan
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Deep neural networks (DNNs) are vulnerable to adversarial examples, with transfer attacks in black-box scenarios posing a severe real-world threat. Adversarial perturbations are often global image disturbances crafted in the spatial domain, leading to perceptible noise due to overfitting to the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) exhibit unequal sensitivity to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most significantly to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach to select and constrain adversarial optimization within a subset of frequency components that are more critical to model prediction. Specifically, we measure the contribution of each individual frequency component and devise a scheme to concentrate adversarial optimization on these important frequency components, thereby creating frequency-aware perturbations. Our approach confines perturbations within model-agnostic critical frequency components, significantly reducing overfitting to the source model, and can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that while concentrating the perturbation within selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, our frequency-aware perturbation manifests superior performance, boosting imperceptibility, transferability, and evasion against various defenses.
AB - Deep neural networks (DNNs) are vulnerable to adversarial examples, with transfer attacks in black-box scenarios posing a severe real-world threat. Adversarial perturbations are often global image disturbances crafted in the spatial domain, leading to perceptible noise due to overfitting to the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) exhibit unequal sensitivity to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most significantly to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach to select and constrain adversarial optimization within a subset of frequency components that are more critical to model prediction. Specifically, we measure the contribution of each individual frequency component and devise a scheme to concentrate adversarial optimization on these important frequency components, thereby creating frequency-aware perturbations. Our approach confines perturbations within model-agnostic critical frequency components, significantly reducing overfitting to the source model, and can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that while concentrating the perturbation within selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, our frequency-aware perturbation manifests superior performance, boosting imperceptibility, transferability, and evasion against various defenses.
KW - Adversarial attack
KW - adversarial example
KW - deep neural networks
KW - transferability
UR - http://www.scopus.com/inward/record.url?scp=85196112582&partnerID=8YFLogxK
U2 - 10.1109/TIFS.2024.3411921
DO - 10.1109/TIFS.2024.3411921
M3 - Article
AN - SCOPUS:85196112582
SN - 1556-6013
VL - 19
SP - 6293
EP - 6304
JO - IEEE Transactions on Information Forensics and Security
JF - IEEE Transactions on Information Forensics and Security
ER -