Abstract
Medical image segmentation is of paramount importance in modern diagnostics, yet progress in three-dimensional (3D) and multimodal segmentation is hindered by the scarcity of annotated datasets. Transfer learning from large datasets can improve performance; however, most existing pretrained models target 2D image classification, leaving a gap for 3D and multimodal segmentation. Direct transfer learning also requires 3D decoders, which raises training and deployment costs. To address these challenges, we introduce a two-stage transfer learning strategy. In the first stage, knowledge from pretrained models is transferred to the segmentation task through closely related proxy tasks. In the second stage, a cross-dimensional transformation module is employed to enhance the compatibility of our model across various medical segmentation tasks. Experimental results on medical segmentation datasets and in modality-missing scenarios demonstrate the effectiveness and versatility of the proposed method, showcasing its potential for diverse medical segmentation tasks.
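The abstract does not describe the internals of the cross-dimensional transformation module. Purely as an illustrative sketch of one common way to bridge 2D pretrained weights and volumetric inputs (I3D-style kernel inflation, not necessarily the authors' approach), the PyTorch snippet below inflates a pretrained 2D convolution into a 3D one; the function name and parameters are hypothetical.

```python
# Illustrative sketch only: inflate a 2D pretrained conv into a 3D conv by
# repeating its kernel along a new depth axis. This is NOT the paper's
# cross-dimensional transformation module, whose design the abstract leaves open.
import torch
import torch.nn as nn


def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Copy a pretrained 2D conv into a 3D conv, rescaling by depth so the
    3D layer responds to depth-constant inputs like the original 2D layer."""
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # Repeat the (out, in, kH, kW) kernel along depth and divide by depth
        # to keep activation magnitudes comparable to the 2D layer.
        w2d = conv2d.weight
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d


# Usage example with a hypothetical pretrained 2D layer and a small 3D volume.
pretrained_2d = nn.Conv2d(3, 64, kernel_size=3, padding=1)
inflated_3d = inflate_conv2d_to_3d(pretrained_2d, depth=3)
volume = torch.randn(1, 3, 16, 64, 64)  # (batch, channels, depth, height, width)
print(inflated_3d(volume).shape)  # torch.Size([1, 64, 16, 64, 64])
```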
| Original language | English |
|---|---|
| Article number | 131124 |
| Journal | Expert Systems with Applications |
| Volume | 308 |
| DOIs | |
| Publication status | Published - 1 May 2026 |
Keywords
- Cross-dimensional
- Medical segmentation
- Multimodal
- Proxy task
- Transfer learning