LHAS: A Lightweight Network Based on Hierarchical Attention for Hyperspectral Image Segmentation

Lujie Song, Yunhao Gao*, Yuanyuan Gui, Daguang Jiang, Mengmeng Zhang, Huan Liu, Wei Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning has garnered extensive attention in hyperspectral image (HSI) processing, yet its application to HSI semantic segmentation has remained relatively limited. Although segmentation methods can often interpret an image of the same scene up to two orders of magnitude faster than classification methods, the segmentation task requires fully labeled training data, i.e., a label for every pixel, and such data are scarce for HSIs. To address this problem, this paper proposes a lightweight segmentation network based on hierarchical attention (LHAS), in which a generalized data augmentation (GDA) method is utilized to acquire relatively sufficient data for semantic segmentation. Specifically, a hierarchical attention module is designed to extract global and local information from HSI patches at different layers. A prototype auxiliary module (PAM) based on cluster contrast is also developed to enhance feature discrimination. On two datasets covering different scenarios, the proposed LHAS achieves superior segmentation performance compared with existing methods, affirming its effectiveness. Additionally, experiments conducted on embedded devices further validate the efficiency of LHAS.
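As a rough illustration only, the sketch below gives one plausible PyTorch-style reading of the two components named in the abstract: a block that fuses local convolutional features with global channel attention, and a prototype-based cluster-contrast loss. All names, layer choices, and the exact loss form (HierarchicalAttentionBlock, prototype_contrast_loss, the squeeze-and-excitation style global branch, temperature tau) are assumptions for illustration and are not the authors' implementation of LHAS.

# Minimal, hypothetical PyTorch sketch of the two ideas named in the abstract:
# (1) a hierarchical attention block mixing local (convolutional) and global
# (channel-attention) features, and (2) a prototype-based cluster-contrast loss.
# Module names, layer sizes, and the loss form are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalAttentionBlock(nn.Module):
    """Fuse local spatial context with a global channel-attention branch."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Local branch: depthwise 3x3 convolution keeps the block lightweight.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: squeeze-and-excitation style channel attention.
        self.global_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.local(x)          # local spatial features
        weights = self.global_att(x)   # global per-channel weights
        return x + local * weights     # residual fusion of both cues

def prototype_contrast_loss(features, labels, prototypes, tau: float = 0.1):
    """Cluster-contrast style loss: pull pixel embeddings toward their class
    prototype and away from the others (assumed form, for illustration)."""
    feats = F.normalize(features, dim=1)      # (N, D) pixel embeddings
    protos = F.normalize(prototypes, dim=1)   # (C, D) class prototypes
    logits = feats @ protos.t() / tau         # cosine-similarity logits
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Toy usage on a random HSI patch: 32 feature channels, 64x64 pixels.
    block = HierarchicalAttentionBlock(channels=32)
    patch = torch.randn(2, 32, 64, 64)
    print(block(patch).shape)                 # torch.Size([2, 32, 64, 64])

    feats = torch.randn(100, 32)              # 100 pixel embeddings
    labels = torch.randint(0, 5, (100,))      # 5 classes
    protos = torch.randn(5, 32)               # one prototype per class
    print(prototype_contrast_loss(feats, labels, protos).item())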

Original language: English
Journal: IEEE Transactions on Geoscience and Remote Sensing
Publication status: Accepted/In press, 2025

Keywords

  • Generalized data augmentation
  • Global and local feature extraction
  • Hyperspectral image semantic segmentation
  • Lightweight network
