Bidirectional grid fusion network for accurate land cover classification of high-resolution remote sensing images

Yupei Wang*, Hao Shi, Yin Zhuang, Qianbo Sang, Liang Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Land cover classification has achieved significant advances through deep convolutional network (ConvNet) based methods. Following this paradigm, land cover classification is modeled as semantic segmentation of very high resolution remote sensing images. To obtain accurate segmentation results, high-level categorical semantics and low-level spatial details should be effectively fused. To this end, we propose a novel bidirectional grid fusion network to aggregate the multilevel features across the ConvNet. Specifically, the proposed model is characterized by a bidirectional fusion architecture, which enriches the diversity of feature interaction by encouraging bidirectional information flow; in this way, our model gains mutual benefits between top-down and bottom-up information flows. A grid fusion architecture then follows for further feature refinement in a dense, hierarchical fusion manner. Finally, because effective feature upsampling is critical for the multiple fusion operations, a content-aware feature upsampling kernel is incorporated for further improvement. Our model consistently achieves significant improvement over state-of-the-art methods on two major datasets, ISPRS and GID.
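The bidirectional fusion idea in the abstract — a top-down pass that carries coarse semantics to fine levels, followed by a bottom-up pass that carries spatial detail back to coarse levels — can be sketched as below. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: nearest-neighbor upsampling and average pooling stand in for the learned, content-aware upsampling kernel and learned fusion operations, and the subsequent grid refinement stage is omitted.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbor 2x upsampling (a stand-in for the paper's
    # learned content-aware upsampling kernel).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2x(x):
    # 2x2 average pooling (a stand-in for a learned downsampling step).
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def bidirectional_fuse(features):
    """Fuse a feature pyramid (ordered finest to coarsest) with a
    top-down pass followed by a bottom-up pass."""
    # Top-down: propagate coarse categorical semantics to finer levels.
    td = list(features)
    for i in range(len(td) - 2, -1, -1):
        td[i] = td[i] + upsample2x(td[i + 1])
    # Bottom-up: propagate fine spatial detail back to coarser levels.
    bu = list(td)
    for i in range(1, len(bu)):
        bu[i] = bu[i] + downsample2x(bu[i - 1])
    return bu

# Toy pyramid: 8x8, 4x4, and 2x2 single-channel feature maps.
pyramid = [np.ones((8, 8)), np.ones((4, 4)) * 2, np.ones((2, 2)) * 3]
fused = bidirectional_fuse(pyramid)
```

After both passes, every level has seen information from every other level once, which is the mutual benefit the abstract refers to; the grid fusion stage would repeat such exchanges densely across further columns.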

Original language: English
Article number: 9195158
Pages (from-to): 5508-5517
Number of pages: 10
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Volume: 13
Publication status: Published - 2020

Keywords

  • Deep learning
  • land cover classification
  • semantic segmentation

