Abstract
Background: Automatic segmentation of cervical tumors is important for quantitative analysis and radiotherapy planning.

Methods: A parallel encoder U-Net (PEU-Net) integrating the multi-modality information of PET/MRI was proposed to segment cervical tumors. It consisted of two parallel encoders with the same structure for the PET and MR images; the features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at multiple scales and refined the segmentation. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality input (PET and/or T2WI) were used for comparison. Performance was evaluated with the Dice similarity coefficient on volumes (DSC3d), and with the DSC (DSC2d) and the 95th percentile of the Hausdorff distance (HD95) on tumor-containing slices.

Results: The proposed PEU-Net achieved the best performance (DSC3d: 0.726 ± 0.204, HD95: 4.603 ± 4.579 mm); its DSC2d (0.871 ± 0.113) was comparable to the best result, obtained by TransUNet with PET/MRI input (0.873 ± 0.125).

Conclusions: Networks with multi-modality input outperformed those with single-modality input. The proposed PEU-Net used multi-modality information more effectively through its redesigned structure and achieved competitive performance.
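The parallel-encoder design summarized in the abstract can be sketched as follows. This is a minimal illustration under assumed layer widths, depth, and a concatenation fusion rule, not the authors' implementation; the module names (`PEUNetSketch`, `Res2NetSkip`) and all hyperparameters are hypothetical. It shows the three ingredients named above: two structurally identical encoders for PET and T2WI, per-level fusion of the two feature streams in the decoder, and a simplified Res2Net-style multi-scale block on each skip connection.

```python
# Illustrative sketch only: dual-encoder U-Net with per-level fusion and
# Res2Net-style skip refinement. Channel widths and fusion-by-concatenation
# are assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Standard U-Net double convolution block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class Res2NetSkip(nn.Module):
    """Simplified Res2Net-style block: split channels into scale groups,
    each group's conv also sees the previous group's output, giving a
    hierarchy of receptive-field sizes within one block."""
    def __init__(self, ch, scales=4):
        super().__init__()
        assert ch % scales == 0
        self.scales = scales
        w = ch // scales
        self.convs = nn.ModuleList(nn.Conv2d(w, w, 3, padding=1) for _ in range(scales - 1))

    def forward(self, x):
        xs = torch.chunk(x, self.scales, dim=1)
        out, y = [xs[0]], xs[0]
        for conv, xi in zip(self.convs, xs[1:]):
            y = conv(xi + y)          # hierarchical residual connection
            out.append(y)
        return torch.cat(out, dim=1) + x


class PEUNetSketch(nn.Module):
    """Two parallel encoders (PET, T2WI); features fused at every level."""
    def __init__(self, chs=(16, 32, 64)):
        super().__init__()
        def make_encoder():
            return nn.ModuleList(
                ConvBlock(1 if i == 0 else chs[i - 1], c) for i, c in enumerate(chs))
        self.enc_pet = make_encoder()   # PET branch
        self.enc_mri = make_encoder()   # T2WI branch, same structure
        self.pool = nn.MaxPool2d(2)
        # Res2Net-style refinement on each fused skip connection
        self.skips = nn.ModuleList(Res2NetSkip(2 * c) for c in chs[:-1])
        self.up2 = nn.ConvTranspose2d(2 * chs[2], chs[1], 2, stride=2)
        self.dec1 = ConvBlock(chs[1] + 2 * chs[1], chs[1])
        self.up1 = nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)
        self.dec0 = ConvBlock(chs[0] + 2 * chs[0], chs[0])
        self.head = nn.Conv2d(chs[0], 1, 1)  # binary tumor mask

    def encode(self, x, enc):
        feats = []
        for i, block in enumerate(enc):
            if i > 0:
                x = self.pool(x)
            x = block(x)
            feats.append(x)
        return feats

    def forward(self, pet, mri):
        fp = self.encode(pet, self.enc_pet)
        fm = self.encode(mri, self.enc_mri)
        # fuse the two modality streams at every encoder level
        fused = [torch.cat([p, m], dim=1) for p, m in zip(fp, fm)]
        x = self.up2(fused[2])
        x = self.dec1(torch.cat([x, self.skips[1](fused[1])], dim=1))
        x = self.up1(x)
        x = self.dec0(torch.cat([x, self.skips[0](fused[0])], dim=1))
        return torch.sigmoid(self.head(x))
```

A forward pass takes one PET and one MR slice of the same spatial size (divisible by 4 here, given two pooling steps) and returns a per-pixel tumor probability map of that size.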
| Original language | English |
|---|---|
| Article number | 95 |
| Journal | Radiation Oncology |
| Volume | 20 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Dec 2025 |
| Externally published | Yes |
Keywords
- Deep learning
- PET/MRI
- Tumor segmentation