CSTSUNet: A Cross Swin Transformer-Based Siamese U-Shape Network for Change Detection in Remote Sensing Images

Yaping Wu, Lu Li*, Nan Wang, Wei Li, Junfang Fan, Ran Tao, Xin Wen, Yanfeng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Change detection (CD) in remote sensing (RS) images is a critical task in which deep learning has achieved significant success. Current networks often represent changes of interest through pixel-based differencing, proportion, classification-based, or feature-concatenation methods. However, these methods often fail to detect the desired changes effectively, as they are highly sensitive to factors such as atmospheric conditions, lighting variations, and phenological variations, resulting in detection errors. Inspired by the transformer structure, we adopt a cross-attention mechanism to extract feature differences between bitemporal images more robustly. The method is motivated by the assumption that if there is no change between an image pair, the semantic features of one temporal image can be well represented by the semantic features of the other; conversely, if there is a change, the reconstruction errors are significant. We therefore propose a Cross Swin transformer-based Siamese U-shaped network, named CSTSUNet, for RS CD. CSTSUNet consists of an encoder, a difference feature extraction stage, and a decoder. The encoder is based on a hierarchical residual network (ResNet) with a Siamese U-shaped structure, allowing parallel processing of bitemporal images and extraction of multiscale features. The difference feature extraction stage consists of four difference feature extraction modules that compute difference features at multiple scales; each module employs a Cross Swin transformer to exchange information between the bitemporal images. The decoder takes the multiscale difference features as input and injects details and boundaries iteratively, level by level, making the change map progressively more accurate.
We conduct experiments on three public datasets, and the experimental results demonstrate that the proposed CSTSUNet outperforms other state-of-the-art methods in terms of both qualitative and quantitative analyses. Our code is available at https://github.com/l7170/CSTSUNet.git.
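The reconstruction assumption behind the cross-attention difference modules can be illustrated with a minimal NumPy sketch (function and variable names are hypothetical; the published model uses Cross Swin transformer blocks with windowed attention rather than this plain dot-product attention): features from one temporal image attend over features from the other, the attention output serves as a reconstruction, and the reconstruction error is taken as the difference feature.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_difference(feat_a, feat_b):
    """Reconstruct feat_a from feat_b via cross-attention and return the
    reconstruction error as the difference feature.

    feat_a, feat_b: (N, C) arrays of N token features with C channels,
    e.g. flattened patch embeddings from the two temporal images.
    """
    scale = feat_a.shape[-1] ** 0.5
    attn = softmax(feat_a @ feat_b.T / scale, axis=-1)  # (N, N) attention map
    recon = attn @ feat_b          # feat_a rebuilt from feat_b's tokens
    return feat_a - recon          # large where content actually changed

# Unchanged pair: tokens of feat_a are well reconstructed from feat_b,
# so the difference feature is near zero. A genuinely changed token
# (here zeroed out in feat_b) cannot be reconstructed and yields a
# large residual.
a = np.eye(4) * 10.0
b = a.copy()
b[0] = 0.0  # simulate a changed region in the second temporal image
```

In this toy example, `np.linalg.norm(cross_attention_difference(a, a))` is much smaller than `np.linalg.norm(cross_attention_difference(a, b))`, which is the signal the difference modules pass on to the decoder.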

Original language: English
Article number: 5623715
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 61
Publication status: Published - 2023

Keywords

  • Change detection (CD)
  • deep learning
  • remote sensing (RS) image
  • transformer
