Abstract
With the growing use of deep learning for object detection in remote sensing imagery, model robustness and security under adversarial attacks have become major concerns. By introducing imperceptible perturbations, adversarial attacks can mislead object detection systems, severely impairing applications such as video surveillance and military reconnaissance. To tackle multi-task optimization conflicts and robustness degradation in adversarial scenarios, we propose a novel multi-task, class-aware adversarial training framework. Our approach jointly addresses classification, bounding box regression, and confidence prediction. A multi-task maximization loss strategy generates adversarial examples that effectively challenge the model, while a class-aware loss mechanism balances robustness across object categories. Experiments on the PASCAL VOC and DIOR datasets show that our method significantly improves resistance to both white-box and black-box attacks. Under PGD attacks, it achieves substantial gains in mean Average Precision (mAP) while maintaining high accuracy on clean data. These results confirm the effectiveness of our method in enhancing the adversarial robustness of remote sensing object detection models.
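As background for the PGD evaluation mentioned in the abstract, the sketch below shows the standard projected gradient descent attack on a toy differentiable classifier. It is NumPy-only and purely illustrative: the logistic model, parameter names, and step sizes are assumptions for demonstration, not the paper's multi-task detector or loss.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Illustrative PGD attack on a logistic classifier (not the paper's model).

    Repeatedly takes a signed-gradient ascent step on the binary
    cross-entropy loss w.r.t. the input x, then projects the perturbed
    input back into an L-infinity ball of radius eps around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid prediction
        grad = (p - y) * w                          # d(BCE loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)       # loss-maximizing step
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project to eps-ball
    return x_adv

# Toy usage: the adversarial input stays within eps but raises the loss.
x = np.array([1.0, -0.5])
w = np.array([0.8, -1.2])
b, y = 0.1, 1.0
x_adv = pgd_attack(x, y, w, b)
```

In the paper's setting, the scalar cross-entropy here would be replaced by a multi-task maximization loss over classification, box regression, and confidence heads, but the inner loop (gradient sign step plus projection) is the same PGD template.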
| Original language | English |
|---|---|
| Article number | 2581373 |
| Journal | Connection Science |
| Volume | 37 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
Keywords
- Adversarial attacks
- adversarial training
- multi-task learning
- object detection
- remote sensing image