Multi-task class-aware adversarial training for remote sensing object detection robustness

  • Zhaohui Ci
  • Zhiguo Liu
  • Yufei Song
  • Fan Qin
  • Yuanzhang Li
  • Jingyi Zhao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

With the increasing application of deep learning to remote sensing image object detection, model robustness and security under adversarial attacks have become major concerns. Adversarial attacks introduce imperceptible perturbations that mislead object detection systems, severely impairing applications such as video surveillance and military reconnaissance. To tackle the issues of multi-task optimization conflicts and robustness degradation in adversarial scenarios, we propose a novel multi-task, class-aware adversarial training framework. Our approach simultaneously addresses classification, bounding box regression, and confidence prediction. By introducing a multi-task maximization loss strategy, we generate adversarial examples that effectively challenge the model. Additionally, a class-aware loss mechanism is employed to balance robustness across object categories. Experimental evaluations on the PASCAL VOC and DIOR datasets show that our method significantly boosts resistance to both white-box and black-box attacks. Under PGD attack conditions, it achieves substantial improvements in mean Average Precision (mAP) while maintaining high accuracy on clean data. These results confirm the effectiveness of our method in enhancing the adversarial robustness of remote sensing object detection models.
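The multi-task maximization idea described in the abstract can be illustrated with a minimal sketch: PGD-style gradient ascent on the *sum* of surrogate classification, box-regression, and confidence losses, projected into an epsilon-ball around the input. The toy linear "heads" and all names below are illustrative assumptions, not the paper's actual architecture or loss functions.

```python
import numpy as np

# Hypothetical toy detector: three fixed linear heads standing in for the
# classification, box-regression, and objectness branches of a real model.
rng = np.random.default_rng(0)
W_cls = rng.normal(size=(4, 8))   # surrogate classification head
W_reg = rng.normal(size=(4, 4))   # surrogate box-regression head
w_obj = rng.normal(size=4)        # surrogate objectness/confidence head

def multi_task_loss(x):
    """Joint loss over all three tasks; the attacker wants this HIGH."""
    cls = np.sum((x @ W_cls) ** 2)   # stand-in classification loss
    reg = np.sum(np.abs(x @ W_reg))  # stand-in box-regression (L1) loss
    obj = np.sum((x @ w_obj) ** 2)   # stand-in confidence loss
    return cls + reg + obj

def pgd_attack(x, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent on the joint multi-task loss."""
    x_adv = x.copy()
    h = 1e-5
    for _ in range(steps):
        # Central-difference gradient of the joint loss w.r.t. the input
        # (a real implementation would use autograd instead).
        g = np.zeros_like(x_adv)
        for i in range(x_adv.size):
            e = np.zeros_like(x_adv)
            e.flat[i] = h
            g.flat[i] = (multi_task_loss(x_adv + e)
                         - multi_task_loss(x_adv - e)) / (2 * h)
        x_adv = x_adv + alpha * np.sign(g)        # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv

x = rng.normal(size=4)
x_adv = pgd_attack(x)
print(multi_task_loss(x), "->", multi_task_loss(x_adv))
```

In adversarial training, examples produced this way would replace or augment clean inputs in each minibatch; the paper's class-aware mechanism would additionally reweight the per-class terms of the training loss so that hard-to-robustify categories receive more emphasis.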

Original language: English
Article number: 2581373
Journal: Connection Science
Volume: 37
Issue number: 1
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Adversarial attacks
  • adversarial training
  • multi-task learning
  • object detection
  • remote sensing image

