Abstract
Severe air pollution degrades air quality and threatens human health, necessitating accurate prediction for pollution control. While spatiotemporal networks integrating sequence models and graph structures dominate current methods, prior work neglects multimodal data fusion to enhance feature representation. This study addresses the spatial limitations of single-perspective ground monitoring by synergizing remote sensing data, which provides global air quality distribution, with ground observations. We propose a Shared-Specific Modality Decoupling-based Spatiotemporal Multimodal Fusion Network for air-quality prediction, comprising: (1) feature extractors for remote sensing images and ground monitoring data, (2) a decoupling module separating shared and modality-specific features, and (3) a hierarchical attention-graph convolution fusion module. This framework achieves effective multimodal fusion by disentangling cross-modal dependencies while preserving unique characteristics. Evaluations on two real-world datasets demonstrate superior performance over baseline models, validating the efficacy of multimodal integration for spatial–temporal air quality forecasting.
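The abstract's central idea, decoupling each modality's features into a shared component and a modality-specific component before fusion, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature dimensions, projection matrices (`W_shared`, `W_rs_spec`, `W_gm_spec`), and the particular similarity/orthogonality losses are illustrative assumptions of how such a decoupling module is commonly built.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors standing in for the two modality extractors
# (assumed shapes for illustration only):
h_rs = rng.standard_normal((1, 16))  # remote sensing image features
h_gm = rng.standard_normal((1, 16))  # ground monitoring features

d = 8  # dimensionality of the decoupled subspaces (illustrative choice)

# Hypothetical linear encoders: one shared projection applied to both
# modalities, plus one specific projection per modality.
W_shared = rng.standard_normal((16, d)) * 0.1
W_rs_spec = rng.standard_normal((16, d)) * 0.1
W_gm_spec = rng.standard_normal((16, d)) * 0.1

s_rs = h_rs @ W_shared    # shared component of remote sensing features
s_gm = h_gm @ W_shared    # shared component of ground features
p_rs = h_rs @ W_rs_spec   # remote-sensing-specific component
p_gm = h_gm @ W_gm_spec   # ground-monitoring-specific component

# Two auxiliary losses often used to enforce such a decoupling:
# 1) a similarity loss pulls the shared components of the two
#    modalities toward a common representation;
# 2) an orthogonality loss pushes each specific component away from
#    its shared counterpart (squared Frobenius norm of the cross term).
similarity_loss = np.mean((s_rs - s_gm) ** 2)
orthogonality_loss = (np.sum((s_rs.T @ p_rs) ** 2)
                      + np.sum((s_gm.T @ p_gm) ** 2))

print(similarity_loss, orthogonality_loss)
```

In a trained network these losses would be minimized jointly with the prediction loss, so that the fusion module can attend to shared information across modalities while the specific components preserve what only one modality observes.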
| Original language | English |
|---|---|
| Article number | 106553 |
| Journal | Environmental Modelling and Software |
| Volume | 192 |
| DOIs | |
| Publication status | Published - Aug 2025 |
| Externally published | Yes |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 3 Good Health and Well-being
Keywords
- Air-quality prediction
- Multimodal fusion
- Spatial–temporal network
- Time series forecasting
Fingerprint
Dive into the research topics of 'Multimodal air-quality prediction: A multimodal feature fusion network based on shared-specific modal feature decoupling'.