Multimodal air-quality prediction: A multimodal feature fusion network based on shared-specific modal feature decoupling

Xiaoxia Chen*, Zhen Wang, Fangyan Dong, Kaoru Hirota

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Severe air pollution degrades air quality and threatens human health, necessitating accurate prediction for pollution control. While spatiotemporal networks integrating sequence models and graph structures dominate current methods, prior work neglects multimodal data fusion to enhance feature representation. This study addresses the spatial limitations of single-perspective ground monitoring by synergizing remote sensing data, which provides global air quality distribution, with ground observations. We propose a Shared-Specific Modality Decoupling-based Spatiotemporal Multimodal Fusion Network for air-quality prediction, comprising: (1) feature extractors for remote sensing images and ground monitoring data, (2) a decoupling module separating shared and modality-specific features, and (3) a hierarchical attention-graph convolution fusion module. This framework achieves effective multimodal fusion by disentangling cross-modal dependencies while preserving unique characteristics. Evaluations on two real-world datasets demonstrate superior performance over baseline models, validating the efficacy of multimodal integration for spatial–temporal air quality forecasting.
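The abstract's central idea, decoupling each modality's features into a shared component (aligned across modalities) and a modality-specific component (kept distinct), can be illustrated with a minimal sketch. The dimensions, the affine stand-in encoders, and the alignment/orthogonality penalties below are illustrative assumptions, not the paper's actual architecture or losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n stations, feature dims for remote sensing (rs)
# and ground monitoring (gm), and a common latent dimension.
n, d_rs, d_gm, d_lat = 4, 16, 8, 6

# Toy inputs standing in for the two modality-specific feature extractors' outputs.
x_rs = rng.standard_normal((n, d_rs))
x_gm = rng.standard_normal((n, d_gm))

# One "shared" and one "specific" linear projection per modality.
w = {k: rng.standard_normal(s) * 0.1 for k, s in {
    "sh_rs": (d_rs, d_lat), "sh_gm": (d_gm, d_lat),
    "sp_rs": (d_rs, d_lat), "sp_gm": (d_gm, d_lat)}.items()}

z_shared_rs, z_shared_gm = x_rs @ w["sh_rs"], x_gm @ w["sh_gm"]
z_spec_rs, z_spec_gm = x_rs @ w["sp_rs"], x_gm @ w["sp_gm"]

def alignment_loss(a, b):
    """Pull the two modalities' shared features together (mean squared distance)."""
    return float(np.mean((a - b) ** 2))

def orthogonality_loss(shared, specific):
    """Push one modality's shared and specific features apart by penalising
    the squared Frobenius norm of their cross-correlation."""
    return float(np.sum((shared.T @ specific) ** 2) / shared.shape[0])

loss = (alignment_loss(z_shared_rs, z_shared_gm)
        + orthogonality_loss(z_shared_rs, z_spec_rs)
        + orthogonality_loss(z_shared_gm, z_spec_gm))

# A simple fusion: average the shared consensus, keep both specifics.
fused = np.concatenate(
    [(z_shared_rs + z_shared_gm) / 2, z_spec_rs, z_spec_gm], axis=1)
print(fused.shape)  # (4, 18)
```

In the paper the fusion stage is a hierarchical attention-graph convolution module rather than the plain concatenation used here; the sketch only shows why decoupling yields three complementary feature sets to fuse.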

Original language: English
Article number: 106553
Journal: Environmental Modelling and Software
Volume: 192
DOIs
Publication status: Published - Aug 2025
Externally published: Yes

Keywords

  • Air-quality prediction
  • Multimodal fusion
  • Spatial–temporal network
  • Time series forecasting
