A VMamba-based Spatial-Spectral Fusion Network for Remote Sensing Image Classification

Lan Luo, Yanmei Zhang*, Yanbing Xu, Tingxuan Yue, Yuxi Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In hyperspectral (HS) and light detection and ranging (LiDAR) collaborative classification, HS provides rich spectral information, while LiDAR offers unique elevation data. However, existing methods often extract features within each modality separately before fusion, which can lead to insufficient fusion owing to a lack of inter-modal complementarity and interaction. To address this, we propose a VMamba-based spatial-spectral fusion network (SSFN) for HS and LiDAR fusion classification, which comprises a dual supplement network (DSN) and a VMamba-based integration network (VMIN); it models long-range dependencies and fully exploits the correlation and complementarity of heterogeneous information. The DSN, consisting of a spatial supplement network (Spa-SN) and a spectral supplement network (Spe-SN), is designed to supplement the features missing from each modality. The Spa-SN complements the spatial features of HS by capturing spatial correlations between LiDAR and HS, while the Spe-SN employs spectral information from HS to compensate for the spectral features absent in LiDAR, so that both modalities obtain a comprehensive spatial-spectral description. The VMIN then augments and interacts the supplemented features, and discriminative features are adaptively selected for classification. Extensive experiments on three benchmark datasets demonstrate that our method outperforms multiple state-of-the-art methods while requiring the fewest parameters.
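Since the abstract only summarizes the architecture, the following is a minimal sketch, not the authors' implementation, of how the described components might be wired together in PyTorch: cross-modal supplement branches stand in for the Spa-SN and Spe-SN, and a generic sequence-mixing layer is used as a placeholder for the VMamba-based integration network (a real VMIN would use 2-D selective-scan blocks). All module names, layer sizes, and the attention-based supplement design are illustrative assumptions.

import torch
import torch.nn as nn

class SupplementBranch(nn.Module):
    """Hypothetical cross-modal supplement: the target modality borrows the
    features it lacks from the source modality via cross-attention."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, target, source):
        # e.g., HS queries LiDAR for spatial/elevation cues (Spa-SN),
        # or LiDAR queries HS for spectral cues (Spe-SN)
        sup, _ = self.attn(query=target, key=source, value=source)
        return self.norm(target + sup)

class SSFNSketch(nn.Module):
    def __init__(self, hs_bands, lidar_bands, dim=64, num_classes=15):
        super().__init__()
        self.embed_hs = nn.Linear(hs_bands, dim)
        self.embed_lidar = nn.Linear(lidar_bands, dim)
        self.spa_sn = SupplementBranch(dim)   # spatial supplement: LiDAR -> HS
        self.spe_sn = SupplementBranch(dim)   # spectral supplement: HS -> LiDAR
        # Placeholder for the VMamba-based integration network (VMIN);
        # an actual implementation would employ 2-D selective-scan blocks.
        self.vmin = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, hs_patch, lidar_patch):
        # hs_patch: (B, N, hs_bands), lidar_patch: (B, N, lidar_bands),
        # where N is the number of pixels in a patch centered on the target pixel
        hs = self.embed_hs(hs_patch)
        lidar = self.embed_lidar(lidar_patch)
        hs_full = self.spa_sn(hs, lidar)      # HS gains missing spatial features
        lidar_full = self.spe_sn(lidar, hs)   # LiDAR gains missing spectral features
        fused = self.vmin(torch.cat([hs_full, lidar_full], dim=1))
        return self.head(fused.mean(dim=1))   # class logits for the center pixel

In use, such a model would be trained patch-wise with a standard cross-entropy loss on the labeled pixels of the co-registered HS/LiDAR scene; the design choice of supplementing each modality before joint integration mirrors the DSN-then-VMIN ordering described in the abstract.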

Keywords

  • Fusion classification
  • hyperspectral (HS)
  • light detection and ranging (LiDAR)
  • spatial-spectral supplement
  • VMamba
