Wide-Context Attention Network for Remote Sensing Image Retrieval

Honghu Wang, Zhiqiang Zhou*, Hua Zong, Lingjuan Miao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Remote sensing image retrieval (RSIR) has broad application prospects, but key challenges remain; one of the most important is obtaining discriminative features. In recent years, the powerful feature-learning ability of convolutional neural networks (CNNs) has significantly improved RSIR, yet their performance can be restricted by the complexity of remote sensing (RS) images, such as small objects, varying scales, and wide scope. To address these problems, we propose a novel wide-context attention network (W-CAN). It leverages two attention modules to adaptively learn local features correlated in the spatial and channel dimensions, respectively, yielding discriminative features with extensive context information. During training, a hybrid loss is introduced to enhance the intraclass compactness and interclass separability of the features. Moreover, we add a branch to learn binary descriptors and realize end-to-end descriptor aggregation. Experiments on four RS benchmark data sets demonstrate that the proposed method outperforms several state-of-the-art RSIR methods.
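The abstract does not give implementation details, but the two attention modules it describes can be sketched in a simplified, self-attention style: one computes affinities between spatial positions, the other between channel maps, and both add a residual connection. This is a minimal numpy illustration under assumed simplifications (the learnable projection convolutions and scaling factors of a full attention module are omitted), not the authors' actual W-CAN implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    # feat: (C, H, W) feature map; attends over the H*W spatial positions
    # so each position aggregates context from every other position.
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)       # (C, N) with N = H*W
    energy = X.T @ X                 # (N, N) pairwise position affinities
    attn = softmax(energy, axis=-1)  # each row: weights over all positions
    out = X @ attn.T                 # context-weighted sum per position
    return (out + X).reshape(C, H, W)  # residual connection

def channel_attention(feat):
    # Attends over the C channel maps, modeling inter-channel dependencies.
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)
    energy = X @ X.T                 # (C, C) channel affinities
    attn = softmax(energy, axis=-1)
    out = attn @ X                   # context-weighted sum per channel
    return (out + X).reshape(C, H, W)

# Toy feature map; the two branches are fused here by simple summation
# (an assumption -- the paper's fusion strategy is not given in the abstract).
feat = np.random.rand(8, 4, 4).astype(np.float32)
fused = spatial_attention(feat) + channel_attention(feat)
print(fused.shape)  # (8, 4, 4)
```

Both branches preserve the input shape, so they can be applied in parallel to the same CNN feature map and combined before the retrieval-descriptor head.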

Original language: English
Pages (from-to): 2082-2086
Number of pages: 5
Journal: IEEE Geoscience and Remote Sensing Letters
Volume: 18
Issue number: 12
DOIs
Publication status: Published - 1 Dec 2021

Keywords

  • Attention network
  • convolutional neural networks (CNNs)
  • remote sensing image retrieval (RSIR)

