Abstract
Hyperspectral image classification has recently made great progress with the development of convolutional neural networks. However, distribution shifts between scenes and redundancy in hyperspectral data still limit classification accuracy. Some existing domain adaptation methods try to mitigate the distribution shifts by training on source samples together with a small number of labeled target samples. In practice, however, labeled target-domain samples are difficult or even impossible to obtain. To address these challenges, we propose a novel dual-attention deep discriminative domain generalization framework (DAD3GM) for cross-scene hyperspectral image classification that requires no labeled target samples during training. DAD3GM comprises two main blocks: dual-attention feature learning (DAFL) and deep discriminative feature learning (DDFL). DAFL extracts spatial features with multi-scale self-attention and spectral features with multi-head external attention. DDFL then extracts deep discriminative features through contrastive regularization and class discrimination regularization. Together, DAFL and DDFL reduce computation time and improve the generalization performance of DAD3GM. The proposed model achieves 84.25%, 83.53%, and 80.63% overall accuracy on the public Houston, Pavia, and GID benchmarks, respectively. Compared with classical and state-of-the-art methods, it achieves the best results, demonstrating its effectiveness and feasibility.
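The abstract names multi-head external attention as the spectral-feature extractor in DAFL. For reference, below is a minimal PyTorch sketch of multi-head external attention in the style of Guo et al. (2021), where attention is computed against two small learnable memory units shared across all samples rather than between tokens, giving complexity linear in sequence length. The head count, memory size, and the treatment of spectral positions as tokens are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultiHeadExternalAttention(nn.Module):
    """Multi-head external attention sketch: each head attends to two
    learnable external memories M_k and M_v instead of to other tokens.
    Hyperparameters here are illustrative, not from the DAD3GM paper."""

    def __init__(self, dim: int, heads: int = 4, mem_slots: int = 64):
        super().__init__()
        assert dim % heads == 0
        self.heads = heads
        self.head_dim = dim // heads
        self.m_k = nn.Linear(self.head_dim, mem_slots, bias=False)  # M_k memory
        self.m_v = nn.Linear(mem_slots, self.head_dim, bias=False)  # M_v memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens could be spectral positions of a pixel
        b, n, d = x.shape
        x = x.view(b, n, self.heads, self.head_dim).transpose(1, 2)  # (b, h, n, hd)
        attn = self.m_k(x)                                   # (b, h, n, mem_slots)
        attn = attn.softmax(dim=2)                           # normalize over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # l1-norm over slots
        out = self.m_v(attn)                                 # (b, h, n, hd)
        return out.transpose(1, 2).reshape(b, n, d)
```

As a hypothetical usage, `MultiHeadExternalAttention(dim=128)(torch.randn(8, 200, 128))` maps a batch of 200-token spectral sequences to an output of the same shape; because the memories are fixed-size, cost grows linearly with the number of tokens, consistent with the abstract's claim of reduced computational time.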
Original language | English
---|---
Article number | 5492
Journal | Remote Sensing
Volume | 15
Issue number | 23
DOIs |
Publication status | Published - Dec 2023
Keywords
- contrastive regularization
- distribution shifts
- domain generalization
- dual attention
- hyperspectral image