TY - JOUR
T1 - EarthMarker
T2 - A Visual Prompting Multimodal Large Language Model for Remote Sensing
AU - Zhang, Wei
AU - Cai, Miaoxin
AU - Zhang, Tong
AU - Zhuang, Yin
AU - Li, Jun
AU - Mao, Xuerui
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2025
Y1 - 2025
N2 - Recent advances in prompt learning have allowed users to interact with artificial intelligence (AI) tools in multiturn dialog, enabling an interactive understanding of images. However, it is difficult and inefficient to deliver information in complicated remote sensing (RS) scenarios using plain language instructions alone, which severely hinders deep comprehension of the latent content in imagery. Moreover, existing prompting strategies designed for natural scenes are hard to apply to RS data due to significant domain differences. To address these challenges, EarthMarker, the first visual prompting-based multimodal large language model (MLLM) in the RS domain, is proposed. EarthMarker is capable of interpreting RS imagery at the image, region, and point levels by leveraging visual prompts (i.e., boxes and points). Specifically, a shared visual encoding method is developed to establish the spatial pattern interpretation relationships between the multiscale representations of input images and various visual prompts. Subsequently, the mixed visual-spatial representations are associated with language instructions to construct joint prompts, enabling the interpretation of intricate content in RS imagery. Furthermore, to bridge the domain gap between natural and RS data and effectively transfer domain-level knowledge from natural scenes to the RS domain, a cross-domain learning strategy is developed to facilitate RS imagery understanding. In addition, to tackle the lack of RS visual prompting data, a dataset named RSVP, featuring multimodal, multigranularity visual prompt instruction-following, is constructed. Extensive experiments demonstrate the competitive performance of EarthMarker. The proposed EarthMarker represents a significant advance in multigranularity RS imagery interpretation under the visual prompting learning framework. Our code and dataset are available at https://github.com/wivizhang/EarthMarker.
AB - Recent advances in prompt learning have allowed users to interact with artificial intelligence (AI) tools in multiturn dialog, enabling an interactive understanding of images. However, it is difficult and inefficient to deliver information in complicated remote sensing (RS) scenarios using plain language instructions alone, which severely hinders deep comprehension of the latent content in imagery. Moreover, existing prompting strategies designed for natural scenes are hard to apply to RS data due to significant domain differences. To address these challenges, EarthMarker, the first visual prompting-based multimodal large language model (MLLM) in the RS domain, is proposed. EarthMarker is capable of interpreting RS imagery at the image, region, and point levels by leveraging visual prompts (i.e., boxes and points). Specifically, a shared visual encoding method is developed to establish the spatial pattern interpretation relationships between the multiscale representations of input images and various visual prompts. Subsequently, the mixed visual-spatial representations are associated with language instructions to construct joint prompts, enabling the interpretation of intricate content in RS imagery. Furthermore, to bridge the domain gap between natural and RS data and effectively transfer domain-level knowledge from natural scenes to the RS domain, a cross-domain learning strategy is developed to facilitate RS imagery understanding. In addition, to tackle the lack of RS visual prompting data, a dataset named RSVP, featuring multimodal, multigranularity visual prompt instruction-following, is constructed. Extensive experiments demonstrate the competitive performance of EarthMarker. The proposed EarthMarker represents a significant advance in multigranularity RS imagery interpretation under the visual prompting learning framework. Our code and dataset are available at https://github.com/wivizhang/EarthMarker.
KW - Multimodal large language models (MLLMs)
KW - remote sensing (RS)
KW - visual prompting
UR - http://www.scopus.com/inward/record.url?scp=85214304020&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2024.3523505
DO - 10.1109/TGRS.2024.3523505
M3 - Article
AN - SCOPUS:85214304020
SN - 0196-2892
VL - 63
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
M1 - 5604219
ER -