Self-Supervised Interactive Image Segmentation

Qingxuan Shi*, Yihang Li, Huijun Di, Enyi Wu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Although interactive image segmentation techniques have made significant progress, supervised learning-based methods rely heavily on large-scale labeled data, which is difficult to obtain in certain domains such as medicine and biology. Models trained on natural images also struggle to achieve satisfactory results when directly applied to these domains. To solve this dilemma, we propose a Self-supervised Interactive Segmentation (SIS) method that achieves superior generalization performance. By clustering features from unlabeled data, we obtain classifiers that assign pseudo-labels to pixels in images. After refinement by super-pixel voting, these pseudo-labels are used to train our segmentation network. To enable our network to better adapt to cross-domain images, we introduce correction learning and anti-forgetting regularization to conduct test-time adaptation. Our experimental results on five datasets show that our approach significantly outperforms other interactive segmentation methods on natural image datasets under the same conditions, and it achieves even better performance than some supervised methods when transferred to the medical image domain. The code and models are available at https://github.com/leal0110/SIS.
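The pseudo-labeling pipeline described in the abstract — cluster per-pixel features into pseudo-classes, then refine the resulting label map by majority voting within super-pixels — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the clustering method, feature source, and super-pixel algorithm here (a plain k-means over raw feature vectors and a precomputed super-pixel map) are assumptions for demonstration.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=10, seed=0):
    """Cluster per-pixel feature vectors into k pseudo-classes.

    features: (N, D) array, one D-dim feature per pixel.
    Returns an (N,) array of cluster indices used as pseudo-labels.
    (Simple Lloyd's k-means; stand-in for the paper's clustering step.)
    """
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute centers as the mean of their assigned pixels
        for c in range(k):
            if (labels == c).any():
                centers[c] = features[labels == c].mean(axis=0)
    return labels

def superpixel_vote(pseudo_labels, superpixels):
    """Refine a pseudo-label map: each super-pixel adopts its majority label.

    pseudo_labels, superpixels: (H, W) integer arrays of the same shape.
    """
    refined = pseudo_labels.copy()
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        vals, counts = np.unique(pseudo_labels[mask], return_counts=True)
        refined[mask] = vals[counts.argmax()]  # majority vote inside the region
    return refined
```

In practice the features would come from a self-supervised backbone and the super-pixels from an algorithm such as SLIC; the voting step smooths isolated mislabeled pixels so the refined map is cleaner training supervision for the segmentation network.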

Original language: English
Pages (from-to): 6797-6808
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 34
Issue number: 8
DOIs
Publication status: Published - 2024

Keywords

  • Interactive image segmentation
  • generalization
  • self-supervised learning
  • test-time adaptation
