LCCo: Lending CLIP to co-segmentation

Xin Duan, Yan Yang, Liyuan Pan*, Xiabi Liu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper studies co-segmenting common semantic objects in a set of images. Existing works either rely on carefully engineered networks to mine implicit semantics in visual features or require extra data (e.g., classification labels) for training. In this paper, we leverage the contrastive language-image pre-training framework (CLIP) for the task. With a backbone segmentation network that processes each image from the set, we introduce semantics from CLIP into the backbone features, refining them in a coarse-to-fine manner with three key modules: (i) an image set feature correspondence module, encoding globally consistent semantics of the image set; (ii) a CLIP interaction module, using CLIP-mined common semantics of the image set to refine the backbone features; (iii) a CLIP regularization module, drawing CLIP towards co-segmentation by identifying the best CLIP semantic and using it to regularize the backbone features. Experiments on four standard co-segmentation benchmark datasets show that our method outperforms state-of-the-art methods.
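To make the coarse-to-fine pipeline concrete, below is a minimal toy sketch of the three refinement stages described in the abstract. All function names, the 50/50 mixing weight, and the 0.1 regularization strength are illustrative assumptions, not the paper's actual architecture; real backbone features and CLIP embeddings are replaced by plain NumPy vectors.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two 1-D vectors (epsilon avoids div-by-zero).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def co_segment_refine(backbone_feats, clip_semantics):
    """Toy coarse-to-fine refinement loosely mirroring the three modules.

    backbone_feats: list of (D,) feature vectors, one per image in the set.
    clip_semantics: list of (D,) candidate common-semantic vectors (a
    stand-in for CLIP embeddings; hypothetical, not the paper's pipeline).
    """
    # (i) Image set feature correspondence: a shared mean feature stands in
    # for the globally consistent semantics of the image set.
    consensus = np.mean(backbone_feats, axis=0)

    # (ii) CLIP interaction: pick the candidate semantic most similar to
    # the set consensus and mix it into each image's feature.
    best = max(clip_semantics, key=lambda s: cosine(s, consensus))
    refined = [0.5 * f + 0.5 * best for f in backbone_feats]

    # (iii) CLIP regularization: pull each refined feature toward the
    # selected semantic (an L2-style regularization step).
    refined = [f - 0.1 * (f - best) for f in refined]
    return refined, best
```

For example, given two near-identical image features and two candidate semantics, the sketch selects the semantic closest to the set consensus and nudges every feature toward it, which is the intuition behind using a common CLIP semantic to align the whole image set.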

Original language: English
Article number: 111252
Journal: Pattern Recognition
Volume: 161
DOIs
Publication status: Published - May 2025

Keywords

  • CLIP
  • Co-segmentation
  • Common semantic
