MAS-PD: Transferable Adversarial Attack against Vision-Transformers-Based SAR Image Classification Task

Boshi Zheng, Jiabin Liu*, Yunjie Li, Yan Li, Zhen Qin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Synthetic aperture radar (SAR) is widely used in civil and military fields. With advancements in vision transformer (ViT) research, these models have become increasingly important in SAR image classification due to their remarkable performance. Therefore, effectively interfering with the classification results of enemy radar systems has become a crucial factor in ensuring battlefield security. Adversarial attacks offer a potential solution, as they can significantly mislead models and cause incorrect predictions. However, recent research on adversarial examples focuses on the vulnerability of convolutional neural network (CNN) models, while attacks on transformer models have not been extensively studied. Considering that ViTs differ from CNNs in their unique multi-head self-attention (MSA) mechanism and their approach of segmenting images into patches for input, this paper proposes "MAS-PD", a black-box adversarial attack method targeting these two mechanisms in ViTs. Firstly, to target the MSA mechanism, we propose the Momentum Attention Skipping (MAS) attack. By skipping the attention gradient during backpropagation and using momentum to avoid local maxima during gradient ascent, our method enhances the transferability of adversarial attacks across different models. Secondly, we apply dropout to the input patches in each iteration, achieving higher attack success rates than using all patches. We compare our method with four traditional adversarial attack techniques across different model architectures, including CNNs and ViTs, using the publicly available MSTAR SAR dataset. The experimental results show that our method achieves an average attack success rate (ASR) of 68.82% across ViTs, while the other methods achieve no more than 50% ASR on average. When applied to CNNs, our method also achieves an average ASR of 67.14%, compared to less than 40% for the other methods. These results demonstrate that our algorithm significantly enhances transferability between ViTs and from ViTs to CNNs in SAR image classification tasks.
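The abstract describes two generic ingredients that can be sketched independently of any particular ViT: a momentum-accumulated gradient-ascent update (in the paper, computed with the attention gradients skipped during backpropagation) and a per-iteration dropout of input patches. The following is a minimal toy sketch of that iteration loop, assuming the model's input gradient is supplied as an opaque callable `grad_fn`; the function name `mas_pd_sketch` and all hyperparameter names are illustrative, not from the paper.

```python
import numpy as np

def mas_pd_sketch(x, grad_fn, eps=0.03, steps=10, mu=1.0,
                  patch=4, keep_prob=0.9, rng=None):
    """Toy momentum attack with per-iteration patch dropout.

    x        : (H, W) input image (H, W divisible by `patch`)
    grad_fn  : callable returning the loss gradient w.r.t. its input;
               in the paper this gradient would be computed with the
               attention branches of the ViT skipped (MAS) -- here it
               is just an opaque black box
    eps      : L-infinity budget of the perturbation
    mu       : momentum decay factor
    keep_prob: probability of keeping each input patch per iteration
    """
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)          # accumulated momentum
    x_adv = x.copy()
    h, w = x.shape
    for _ in range(steps):
        # Patch Dropout: randomly zero whole patches of the input
        # before querying the gradient, so each step sees a
        # different subset of patches.
        mask = rng.random((h // patch, w // patch)) < keep_prob
        mask = np.kron(mask, np.ones((patch, patch)))
        grad = grad_fn(x_adv * mask)
        # Momentum accumulation (MI-FGSM style): normalize the raw
        # gradient, then decay-and-add into the running direction.
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        # Signed step, then project back into the eps-ball around x.
        x_adv = np.clip(x_adv + (eps / steps) * np.sign(g),
                        x - eps, x + eps)
    return x_adv
```

With a real model, `grad_fn` would be a surrogate ViT's input gradient (e.g. via automatic differentiation with the attention-gradient path detached); the sketch only shows how the momentum buffer, patch mask, and projection interact across iterations.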

Keywords

  • Adversarial attack
  • black-box attack
  • synthetic aperture radar (SAR)
  • vision transformers (ViTs)

