Language Interprets Vision: Adaptive Encoding and Decoding for Referring Image Segmentation

  • A. Qi
  • Sanyuan Zhao*
  • Xingping Dong
  • Jianbing Shen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Referring image segmentation aims to segment the referent described by a natural linguistic expression. Due to the distinct modality properties of images and language, it is challenging to align token embeddings effectively with visual regions. Unlike existing methods that coordinate linguistic cues with a specific visual region, we propose a novel referring image segmentation paradigm, language interprets vision (LIV), which densely aligns the visual and linguistic modalities at a fine-grained level and fuses the multi-modal biases effectively. LIV re-encodes visual features along the compositional dimensions of <Height, Width, Channel>, interpreting vision through the linguistic expression and making cross-modality alignment denser. More specifically, we innovatively consider the adjacency of visual regions at the channel level to promote channel semantic consistency and to propagate fine-grained semantics throughout the segmentation procedure. In addition, we theoretically analyze how LIV enriches the representation space and makes the comprehensive modality-fused biases more generalizable, which boosts the precision of mask prediction. Extensive experiments on three benchmarks validate that our proposed framework significantly outperforms other methods by a remarkable margin.
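To illustrate the channel-level idea sketched in the abstract, the following is a minimal, hypothetical example of language-guided channel reweighting: a sentence embedding is projected to per-channel scores that modulate the visual feature map. All names (`language_guided_channel_reweight`, the projection matrix `proj`) and the specific formulation are assumptions for illustration only, not the authors' actual LIV implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def language_guided_channel_reweight(visual, lang, proj):
    """Hypothetical sketch: reweight visual channels by language-derived attention.

    visual: (C, H, W) visual feature map
    lang:   (D,) sentence embedding of the referring expression
    proj:   (C, D) assumed projection from language space to channel scores
    """
    scores = proj @ lang                     # (C,) per-channel affinity with the expression
    weights = softmax(scores)                # normalize across the channel dimension
    return visual * weights[:, None, None]   # broadcast weights over H and W

rng = np.random.default_rng(0)
C, H, W, D = 4, 2, 2, 3
out = language_guided_channel_reweight(
    rng.normal(size=(C, H, W)),
    rng.normal(size=(D,)),
    rng.normal(size=(C, D)),
)
print(out.shape)  # (4, 2, 2)
```

This toy version keeps the spatial layout intact and only modulates the channel axis; the paper's full paradigm additionally re-encodes along height and width.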

Original language: English
Pages (from-to): 189-202
Number of pages: 14
Journal: Computational Visual Media
Volume: 12
Issue number: 1
DOIs
Publication status: Published - 2026

Keywords

  • attention
  • cross modal
  • referring image segmentation (RIS)
  • segmentation
  • Transformer
