Transferring cross-Dimensional knowledge via proxy task for medical image segmentation

  • Cunhan Guo
  • Heyan Huang
  • Yang Hao Zhou
  • Danjie Han
  • Changsen Yuan*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Medical image segmentation is of paramount importance in modern diagnostics, yet progress on three-dimensional (3D) and multimodal segmentation is hindered by the scarcity of annotated datasets. Transfer learning from large datasets can improve performance, but current pretrained models mainly target 2D image classification, leaving a gap for 3D and multimodal segmentation. Direct transfer also requires 3D decoders, raising training and deployment costs. To address these challenges, we introduce a two-stage transfer learning strategy. In the first stage, knowledge from pretrained models is transferred to the segmentation task through closely related proxy tasks. In the second stage, a cross-dimensional transformation module enhances the model's compatibility across various medical segmentation tasks. Experimental results on medical segmentation datasets and in modality-missing scenarios demonstrate the effectiveness and versatility of the proposed method for diverse medical segmentation tasks.
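The abstract does not specify the internal form of the cross-dimensional transformation module. A common technique for carrying 2D pretrained weights into a 3D network is kernel inflation: replicate each 2D convolution kernel along a new depth axis and rescale so the 3D response on a depth-constant volume matches the original 2D response. The sketch below is purely illustrative of that generic technique, not the paper's implementation; `inflate_2d_kernel` is a hypothetical helper operating on a NumPy weight array.

```python
import numpy as np

def inflate_2d_kernel(w2d: np.ndarray, depth: int) -> np.ndarray:
    """Inflate a 2D conv kernel of shape (C_out, C_in, kH, kW) to a 3D
    kernel of shape (C_out, C_in, depth, kH, kW).

    The kernel is replicated along the new depth axis and rescaled by
    1/depth, so convolving a volume that is constant along depth yields
    the same response as the original 2D filter on one slice.
    This is a generic weight-inflation sketch, not the paper's module.
    """
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

# Example: inflate a single 3x3 averaging kernel to 3x3x3.
w2d = np.ones((1, 1, 3, 3)) / 9.0
w3d = inflate_2d_kernel(w2d, depth=3)
print(w3d.shape)  # (1, 1, 3, 3, 3)
print(w3d.sum())  # total weight preserved: 1.0
```

The 1/depth rescaling is the standard choice because it preserves the filter's total weight, keeping activation magnitudes comparable between the 2D source and the inflated 3D network at the start of fine-tuning.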

Original language: English
Article number: 131124
Journal: Expert Systems with Applications
Volume: 308
DOIs
Publication status: Published - 1 May 2026

Keywords

  • cross-dimensional
  • medical segmentation
  • multimodal
  • proxy task
  • transfer learning
