Attention and Mamba-Driven Quality Assessment for Underwater Images

Jingchao Cao, Baochao Zhang, Yutao Liu*, Runze Hu, Ke Gu, Guangtao Zhai, Junyu Dong

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Underwater imaging is essential in a variety of fields, including resource exploration, marine observation, and scientific research. However, the quality of underwater images is often compromised by environmental factors such as light scattering, absorption, and the presence of fog, leading to distortions such as color shifts, low contrast, and blurriness. To address these challenges, we propose a novel underwater image quality assessment (UIQA) method, the Attention and Mamba-driven Quality Index (AMQI). The AMQI model employs a multi-stage architecture designed to capture both local and global image features critical for underwater quality evaluation. First, a Shallow Feature Extractor (SFE) captures essential spatial details. Next, the Local Information Representation Network (LIR-Net), equipped with Channel Attention (CA) and Large Kernel-guided Spatial (LKS) mechanisms, enhances fine details and captures long-range dependencies to address underwater-specific distortions. The Global Information Representation Network (GIR-Net) further processes the features using a combination of the Visual State-Space Model (VSSM) and ResNet-50 to capture high-level semantic and contextual information. Finally, the Feature-Quality Mapping Network (FQM) converts the learned features into a quality score, ensuring precise predictions of image quality. Extensive experiments on the Underwater Image Quality Database (UIQD) demonstrate that AMQI outperforms current state-of-the-art IQA and UIQA models in terms of accuracy and correlation with human subjective evaluations. The model's robustness and generalization capabilities are further validated through detailed ablation studies and cross-database evaluations, showcasing its strong performance across diverse underwater environments.
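The abstract describes a multi-stage pipeline (SFE → LIR-Net → GIR-Net → FQM). The following is a minimal, hypothetical PyTorch sketch of such a pipeline, not the authors' implementation: it assumes a squeeze-and-excitation style channel attention, a large-kernel depthwise convolution standing in for the LKS mechanism, a plain ResNet-50 global branch (the paper's VSSM/Mamba block is omitted), and a simple regression head for the quality score. All class names and layer sizes here are illustrative assumptions.

```python
# Hedged sketch (not the authors' code): an approximation of the AMQI stages
# described in the abstract -- Shallow Feature Extractor (SFE), a local branch
# with channel attention and a large-kernel conv (standing in for LIR-Net),
# a global ResNet-50 branch (standing in for GIR-Net; the VSSM is omitted),
# and a regression head mapping features to a quality score (FQM).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed form of CA)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.fc(x)

class AMQISketch(nn.Module):
    def __init__(self):
        super().__init__()
        # SFE: a few convolutions capturing shallow spatial detail.
        self.sfe = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Local branch: large-kernel depthwise conv + channel attention.
        self.local = nn.Sequential(
            nn.Conv2d(64, 64, 7, padding=3, groups=64),
            nn.Conv2d(64, 64, 1), nn.ReLU(inplace=True),
            ChannelAttention(64),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global branch: ResNet-50 backbone for high-level semantic context.
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()
        self.global_branch = backbone
        # FQM: map concatenated local + global features to a quality score.
        self.fqm = nn.Sequential(
            nn.Linear(64 + 2048, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 1),
        )
    def forward(self, x):
        shallow = self.sfe(x)
        local_feat = self.local(shallow)
        global_feat = self.global_branch(x)
        return self.fqm(torch.cat([local_feat, global_feat], dim=1)).squeeze(1)

# Example: score a batch of resized underwater images.
model = AMQISketch().eval()
with torch.no_grad():
    scores = model(torch.rand(2, 3, 224, 224))
print(scores.shape)  # torch.Size([2])
```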

Original language: English
Journal: IEEE Transactions on Multimedia
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Attention mechanism
  • Deep learning
  • Image quality assessment (IQA)
  • Underwater image
  • Visual state-space model (VSSM)
