Comparative Analysis of 3D-Extended Deep Learning Approaches for Precise Brain MRI Segmentation

  • Linghao Sun
  • Zhilin Zhang
  • Jinglong Wu
  • Lichang Yao
  • Youshan Ma
  • Ting Jiang
  • Ziqi Liu
  • Qi Dai*
  • Xiujun Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Medical image segmentation is key to disease diagnosis and treatment planning, and deep learning is widely used for this task. For segmentation of the brainstem and amygdala in the human brain, five representative 2D models (Fully Convolutional Networks (FCN), U-Net, DeepLab, UNet++, TransUNet) were extended to 3D and their performance was evaluated on the OASIS dataset. The results showed that 3D U-Net performed best, while the 3D variants of FCN, DeepLab and TransUNet were inferior to the U-Net series. All models segmented the brainstem far more accurately than the amygdala. This study can serve as a reference for constructing segmentation models for specific brain regions.
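
For illustration only (this is not the authors' code), extending a 2D segmentation network to 3D typically amounts to replacing planar layers with their volumetric counterparts so the model consumes whole MRI volumes instead of slices. The following minimal PyTorch sketch shows this swap for a U-Net-style double-convolution encoder block; the channel widths and patch size are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class DoubleConv3D(nn.Module):
    """3D analogue of the standard U-Net double-convolution block:
    Conv2d/BatchNorm2d are swapped for Conv3d/BatchNorm3d so the
    network operates on full volumes (depth x height x width)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Example: one encoder stage on a single-channel MRI volume patch
x = torch.randn(1, 1, 64, 64, 64)   # (batch, channel, depth, height, width)
stage = nn.Sequential(DoubleConv3D(1, 32), nn.MaxPool3d(2))
print(stage(x).shape)                # torch.Size([1, 32, 32, 32, 32])
```

The same substitution (pooling, upsampling, and attention layers replaced by 3D versions) is the usual route for 3D variants of FCN, DeepLab, UNet++, and TransUNet, at the cost of substantially higher memory per sample.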

Original language: English
Title of host publication: 2025 19th International Conference on Complex Medical Engineering, CME 2025
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 81-84
Number of pages: 4
ISBN (Electronic): 9798331599997
DOIs
Publication status: Published - 2025
Externally published: Yes
Event: 19th International Conference on Complex Medical Engineering, CME 2025 - Lanzhou, China
Duration: 1 Aug 2025 - 3 Aug 2025

Publication series

Name: 2025 19th International Conference on Complex Medical Engineering, CME 2025

Conference

Conference: 19th International Conference on Complex Medical Engineering, CME 2025
Country/Territory: China
City: Lanzhou
Period: 1/08/25 - 3/08/25

Keywords

  • CNN
  • Deep learning
  • Medical image segmentation
  • Transformer
  • U-Net
