Semi-Supervised Unpaired Multi-Modal Learning for Label-Efficient Medical Image Segmentation

Lei Zhu*, Kaiyuan Yang, Meihui Zhang, Ling Ling Chan, Teck Khim Ng, Beng Chin Ooi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

17 Citations (Scopus)

Abstract

Multi-modal learning, which uses unpaired labeled data from multiple modalities to boost the performance of deep learning models on each individual modality, has recently attracted considerable interest in medical image segmentation. However, existing unpaired multi-modal learning methods require a substantial amount of labeled data from both modalities to obtain satisfactory segmentation results, and such data are not easy to obtain in practice. In this paper, we investigate the use of unlabeled data for label-efficient unpaired multi-modal learning, focusing on the scenario where labeled data are scarce and unlabeled data are abundant. We term this new problem Semi-Supervised Unpaired Multi-Modal Learning and propose a novel deep co-training framework for it. Specifically, our framework consists of two segmentation networks, one trained for each modality. Unlabeled data are used to learn two image translation networks that translate images across modalities, so that labeled data from one modality can be employed, after translation, to train the segmentation network of the other modality. To prevent overfitting in the label-scarce setting, we introduce a new semantic consistency loss that regularizes the predictions of an image and its translation from the two segmentation networks to be semantically consistent. We further design a novel class-balanced deep co-training scheme to effectively leverage the valuable complementary information from both modalities and boost segmentation performance. We verify the effectiveness of our framework on two medical image segmentation tasks, where it significantly outperforms existing methods.
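The semantic consistency idea described in the abstract can be sketched as a simple penalty on the disagreement between the class-probability maps that the two segmentation networks produce for an image and for its cross-modal translation. The sketch below is a minimal, hedged illustration, not the paper's exact formulation: the function names, the mean-squared-error form of the penalty, and the toy logits are all assumptions made for the example.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def semantic_consistency_loss(logits_a, logits_b):
    """Mean squared distance between the per-pixel class-probability
    maps predicted for an image (by the segmentation network of its
    own modality) and for its translation (by the other network)."""
    p_a = softmax(logits_a)
    p_b = softmax(logits_b)
    return float(np.mean((p_a - p_b) ** 2))

# Toy example: a 2x2 "image" with 3 segmentation classes.
rng = np.random.default_rng(0)
logits_x  = rng.normal(size=(2, 2, 3))   # seg net A on image x
logits_tx = rng.normal(size=(2, 2, 3))   # seg net B on translation t(x)

loss = semantic_consistency_loss(logits_x, logits_tx)
```

In training, such a term would be added to the supervised segmentation losses of both networks, pushing the two modality-specific networks toward semantically consistent predictions even on unlabeled images.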

Original language: English
Title of host publication: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 - 24th International Conference, Proceedings
Editors: Marleen de Bruijne, Philippe C. Cattin, Stéphane Cotin, Nicolas Padoy, Stefanie Speidel, Yefeng Zheng, Caroline Essert
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 394-404
Number of pages: 11
ISBN (Print): 9783030871956
DOIs
Publication status: Published - 2021
Event: 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 - Virtual, Online
Duration: 27 Sept 2021 – 1 Oct 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12902 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
City: Virtual, Online
Period: 27/09/21 – 1/10/21

Keywords

  • Deep co-training
  • Segmentation
  • Semi-supervised learning
  • Unpaired multi-modal learning
