EarthMarker: A Visual Prompting Multimodal Large Language Model for Remote Sensing

Wei Zhang, Miaoxin Cai, Tong Zhang, Yin Zhuang*, Jun Li, Xuerui Mao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in prompt learning have allowed users to interact with artificial intelligence (AI) tools through multiturn dialog, enabling an interactive understanding of images. However, it is difficult and inefficient to convey information about complicated remote sensing (RS) scenes using plain language instructions alone, which severely hinders deep comprehension of the latent content in imagery. Moreover, existing prompting strategies designed for natural scenes are difficult to apply to RS data interpretation due to significant domain differences. To address these challenges, EarthMarker, the first visual prompting-based multimodal large language model (MLLM) in the RS domain, is proposed. EarthMarker can interpret RS imagery at the image, region, and point levels by leveraging visual prompts (i.e., boxes and points). Specifically, a shared visual encoding method is developed to establish spatial pattern interpretation relationships between the multiscale representations of the input image and the various visual prompts. The mixed visual-spatial representations are then combined with language instructions to construct joint prompts, enabling the interpretation of intricate content in RS imagery. Furthermore, to bridge the domain gap between natural and RS data and to effectively transfer domain-level knowledge from natural scenes to the RS domain, a cross-domain learning strategy is developed to facilitate RS imagery understanding. In addition, to address the lack of RS visual prompting data, a dataset named RSVP, featuring multimodal, multigranularity visual prompt instruction-following, is constructed. Extensive experiments demonstrate the competitive performance of EarthMarker, which represents a significant advance in multigranularity RS imagery interpretation under the visual prompting learning framework. Our code and dataset are available at https://github.com/wivizhang/EarthMarker.
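To make the shared visual encoding idea concrete, the sketch below illustrates in PyTorch how a region-level visual prompt (a box) could be encoded with the same encoder as the full image, with the resulting tokens concatenated with language-instruction embeddings to form a joint prompt. This is a minimal sketch for exposition only: the module names, dimensions, crop-based prompt handling, and fusion scheme are illustrative assumptions, not EarthMarker's actual implementation.

# Illustrative sketch of a shared visual encoding scheme (assumed design,
# not EarthMarker's real code): the whole image and the prompt region pass
# through the SAME encoder, so their tokens live in a comparable space.
import torch
import torch.nn as nn

class SharedVisualEncoder(nn.Module):
    """Toy stand-in for a ViT-style encoder shared by images and prompt regions."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((16, 16))   # normalize spatial size
        self.token_proj = nn.Linear(3, embed_dim)    # per-location RGB -> token

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (B, 3, H, W) -> tokens: (B, 256, embed_dim)
        feats = self.pool(pixels).flatten(2).transpose(1, 2)  # (B, 256, 3)
        return self.token_proj(feats)

def crop_from_box(image: torch.Tensor, box: tuple) -> torch.Tensor:
    """Extract the region named by a box visual prompt (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = box
    return image[:, :, y1:y2, x1:x2]

encoder = SharedVisualEncoder()
image = torch.rand(1, 3, 512, 512)                   # whole RS scene
region = crop_from_box(image, (100, 100, 228, 228))  # region-level visual prompt

# Shared weights yield aligned token spaces for image- and region-level inputs.
image_tokens = encoder(image)                        # (1, 256, 256)
region_tokens = encoder(region)                      # (1, 256, 256)

# Joint prompt: mixed visual-spatial tokens plus (toy) instruction embeddings.
instruction_tokens = torch.rand(1, 12, 256)          # stand-in for text embeddings
joint_prompt = torch.cat([image_tokens, region_tokens, instruction_tokens], dim=1)
print(joint_prompt.shape)                            # torch.Size([1, 524, 256])

A point prompt could be handled the same way under these assumptions, e.g., by cropping a small window centered on the point before encoding; the joint prompt sequence would then be fed to the language model backbone.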

Original language: English
Article number: 5604219
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
DOI: 10.1109/TGRS.2024.3523505
Publication status: Published - 2025

Keywords

  • Multimodal large language models (MLLMs)
  • remote sensing (RS)
  • visual prompting


Cite this

Zhang, W., Cai, M., Zhang, T., Zhuang, Y., Li, J., & Mao, X. (2025). EarthMarker: A Visual Prompting Multimodal Large Language Model for Remote Sensing. IEEE Transactions on Geoscience and Remote Sensing, 63, Article 5604219. https://doi.org/10.1109/TGRS.2024.3523505