Abstract
Band selection is an effective way to reduce redundancy in a hyperspectral image (HSI) without compromising the original content. Popular band selection methods usually rely on strong assumptions, such as linear or nonlinear models with simple predefined kernel functions, to capture the correlations between bands. However, such strong assumptions may not hold in real environments because of the complex interactions between bands. In this letter, we treat hyperspectral band selection as a spectral reconstruction task. Assuming that an HSI can be sparsely reconstructed from a few informative bands, we propose an attention-based autoencoder to model the underlying nonlinear interdependencies between bands. The proposed model consists of two parts: an attention module and an autoencoder. The attention module produces an attention mask that selects the most informative bands for every pixel, and the autoencoder uses these bands to reconstruct the raw HSI. The final band selection is performed by clustering the column vectors of the attention mask and identifying the most representative band for each cluster. Unlike most existing band selection methods, the proposed method directly learns global nonlinear correlations between bands without strong assumptions. The model is easy to implement, and all of its parameters can be jointly optimized with the stochastic gradient descent algorithm. Experiments on three public data sets show that the proposed method offers promising results.
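The final selection step described in the abstract can be sketched as follows. This is a minimal illustration only: the paper learns the attention mask with a trained attention module, whereas here the mask is taken as given, and the simple k-means routine, function name, and deterministic initialization are assumptions made for the sketch, not details from the paper.

```python
import numpy as np

def select_bands(attention_mask, k, n_iter=20):
    """Cluster the column vectors of an attention mask (pixels x bands)
    into k groups and return one representative band per cluster: the
    band whose column lies closest to its cluster centroid.
    Illustrative sketch; names and initialization are not from the paper."""
    cols = attention_mask.T                              # one vector per band
    init = np.linspace(0, len(cols) - 1, k).astype(int)  # deterministic seeds
    centroids = cols[init].astype(float)
    for _ in range(n_iter):
        # assign each band's column to its nearest centroid
        dists = np.linalg.norm(cols[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its member columns
        for j in range(k):
            members = cols[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    dists = np.linalg.norm(cols[:, None] - centroids[None], axis=2)
    # pick, per cluster, the band closest to the centroid
    selected = [int(np.where(labels == j)[0][dists[labels == j, j].argmin()])
                for j in range(k) if np.any(labels == j)]
    return sorted(selected)
```

For example, a toy mask whose first three band columns are identical and distinct from the last three yields one representative band from each group.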
| Original language | English |
|---|---|
| Article number | 9032380 |
| Pages (from-to) | 147-151 |
| Number of pages | 5 |
| Journal | IEEE Geoscience and Remote Sensing Letters |
| Volume | 18 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2021 |
| Externally published | Yes |
Keywords
- Attention
- band selection
- hyperspectral images (HSIs)
- neural network
Title: Band Selection of Hyperspectral Images Using Attention-Based Autoencoders