Mixed-noise band selection for hyperspectral images

Zhen Li, Chenwei Deng*, Yun Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Hyperspectral images (HSIs) with abundant spectral information are generally susceptible to various types of noise, such as Gaussian noise and stripe noise. Recently, a few quality-based selection algorithms have been proposed to remove noise bands from HSIs. However, these methods are unable to discriminate the mixed-noise bands of HSIs and are sensitive to variations in image content and luminance. Here, we develop a mixed-noise band selection framework that can effectively separate the Gaussian and stripe noise bands of HSIs. We first improve tensor decomposition to reconstruct the mixed-noise components and low-rank components, which reduces the influence of image content and luminance changes. Spectral smoothness constraints and unidirectional total variation are incorporated into the decomposition model to enhance the separation of Gaussian and stripe noise. Then, statistical features, including Weibull and histogram of oriented gradients (HOG) features, are applied to extract robust parameters from the mixed-noise components. More importantly, an extreme learning machine (ELM) is trained to predict the noise bands; the ELM has an extremely fast learning speed and tends to achieve better performance than other networks. Finally, by aggregating all these strategies, our method can select the mixed-noise bands efficiently. Experimental results on HSIs with both synthetic and real noise indicate that the proposed method outperforms state-of-the-art methods.
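The final stage described in the abstract, in which an ELM is trained on per-band statistical features to flag noise bands, can be sketched roughly as follows. This is a minimal illustration only, assuming a standard single-hidden-layer ELM with a closed-form least-squares output layer and a hypothetical feature matrix built from Weibull and HOG statistics per band; the feature definitions, network size, and training details used in the paper may differ.

```python
import numpy as np

def train_elm(features, labels, n_hidden=200, seed=0):
    """Standard ELM: random hidden layer, closed-form output weights.

    features : (n_bands, n_features) statistics per band (e.g. Weibull
               shape/scale and HOG descriptors -- hypothetical layout).
    labels   : (n_bands,) 1 for a mixed-noise band, 0 for a clean band.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                       # random hidden biases
    H = np.tanh(features @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ labels                        # least-squares output weights
    return W, b, beta

def predict_noise_bands(features, W, b, beta, threshold=0.5):
    """Score each band and return the indices flagged as noise bands."""
    scores = np.tanh(features @ W + b) @ beta
    return np.where(scores > threshold)[0]
```

Solving only for the output weights via a pseudo-inverse, while leaving the hidden layer random, is what gives the ELM the fast learning speed highlighted in the abstract.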

Original language: English
Pages (from-to): 173815-173825
Number of pages: 11
Journal: IEEE Access
Volume: 8
DOIs
Publication status: Published - 2020

Keywords

  • Hyperspectral images
  • Mixed-noise band selection
  • Tensor decomposition
