Prior-Guided Data Augmentation for Infrared Small Target Detection

Ao Wang, Wei Li*, Zhanchao Huang, Xin Wu, Feiran Jie, Ran Tao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Recently, many deep learning (DL) methods have been proposed for infrared small target detection (ISTD). A DL-based model for the ISTD task requires large amounts of samples, but the diversity of existing ISTD datasets is not sufficient to train a DL model with good generalization. To address this issue, a data augmentation method called prior-guided data augmentation (PGDA) is proposed to expand the diversity of training samples indirectly, without additional training data. Specifically, it decouples the target description and localization abilities by preserving the scale distribution and physical characteristics of targets. Furthermore, a multiscene infrared small target dataset (MSISTD) consisting of 1077 images with 1343 instances is constructed. The numbers of images and instances in MSISTD are 2.4 and 2.5 times those of the existing largest real ISTD dataset, the single-frame infrared small target (SIRST) benchmark, respectively. Extensive experiments on the SIRST dataset and the constructed MSISTD dataset show that the proposed PGDA improves the performance of existing DL-based ISTD methods without extra model complexity. Compared with SIRST, MSISTD has been evaluated as a more comprehensive and accurate benchmark for ISTD tasks.
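The abstract does not spell out the augmentation procedure itself. A common way to realize this kind of prior-guided augmentation for small targets is to re-paste annotated targets at new locations while keeping their original pixel footprint, so that the scale distribution and gray-level (physical) characteristics are preserved while positions vary. The sketch below illustrates that general idea only; the function name pgda_augment, the bounding-box input format, and the max-blending rule are illustrative assumptions, not the paper's actual PGDA implementation.

import numpy as np

def pgda_augment(image, boxes, rng=None, num_copies=1):
    """Hypothetical prior-guided augmentation sketch (not the paper's code).

    Re-pastes annotated small targets at random locations in the same
    single-channel infrared frame. Each target patch keeps its original
    size and gray-level profile, so the scale distribution and target
    characteristics are preserved while target positions (and hence the
    surrounding context) are diversified.

    image : 2-D numpy array (single-channel infrared frame)
    boxes : list of (y0, x0, y1, x1) target bounding boxes
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    new_boxes = list(boxes)
    h, w = image.shape
    for (y0, x0, y1, x1) in boxes:
        patch = image[y0:y1, x0:x1]
        ph, pw = patch.shape
        for _ in range(num_copies):
            # Choose a random location; the patch is not rescaled.
            ny = int(rng.integers(0, h - ph))
            nx = int(rng.integers(0, w - pw))
            # Max-blend so the bright target dominates the local background
            # (an illustrative choice; real blending may differ).
            region = out[ny:ny + ph, nx:nx + pw]
            out[ny:ny + ph, nx:nx + pw] = np.maximum(region, patch)
            new_boxes.append((ny, nx, ny + ph, nx + pw))
    return out, new_boxes

Applied to each training image, such a transform expands positional diversity without collecting new data, which matches the abstract's stated goal of enlarging sample diversity indirectly.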

Original language: English
Pages (from-to): 10027-10040
Number of pages: 14
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Volume: 15
DOIs
Publication status: Published - 2022

Keywords

  • Data augmentation
  • infrared images
  • infrared small target detection
  • multiscene infrared dataset
