TY - GEN
T1 - FAENet
T2 - 19th International Conference on Complex Medical Engineering, CME 2025
AU - Zhang, Yini
AU - Ma, Yunxiao
AU - Zhao, Fanghui
AU - Lai, Ruixuan
AU - Yan, Tianyi
AU - Liu, Tiantian
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The construction of individualized head models from medical imaging data constitutes a critical focus in noninvasive brain stimulation (NIBS) research, with applications encompassing therapeutic electrical stimulation optimization and safety assessment. These computational head models are typically generated by segmenting magnetic resonance imaging (MRI) data into discrete anatomical tissues. However, conventional three-dimensional (3D) convolutional neural networks (CNNs), despite their widespread adoption, exhibit high computational costs, parameter redundancy, and overfitting risks. In this study, we propose FAENet, an autoencoder-based architecture that integrates adaptive multi-branch decoding within a 2.5D segmentation framework for multi-class brain tissue segmentation. The network employs parallel decoder branches to enable multi-resolution segmentation, with each branch independently optimized according to the textural heterogeneity across distinct anatomical structures to enhance localization accuracy. Experimental results demonstrate that the proposed model achieves a mean Dice similarity coefficient (DSC) of 92.3% with a processing time of 6.63 seconds per volume. This performance indicates accurate reconstruction of tissue boundaries across all segmented classes while maintaining computational efficiency, thereby outperforming conventional 3D approaches.
AB - The construction of individualized head models from medical imaging data constitutes a critical focus in noninvasive brain stimulation (NIBS) research, with applications encompassing therapeutic electrical stimulation optimization and safety assessment. These computational head models are typically generated by segmenting magnetic resonance imaging (MRI) data into discrete anatomical tissues. However, conventional three-dimensional (3D) convolutional neural networks (CNNs), despite their widespread adoption, exhibit high computational costs, parameter redundancy, and overfitting risks. In this study, we propose FAENet, an autoencoder-based architecture that integrates adaptive multi-branch decoding within a 2.5D segmentation framework for multi-class brain tissue segmentation. The network employs parallel decoder branches to enable multi-resolution segmentation, with each branch independently optimized according to the textural heterogeneity across distinct anatomical structures to enhance localization accuracy. Experimental results demonstrate that the proposed model achieves a mean Dice similarity coefficient (DSC) of 92.3% with a processing time of 6.63 seconds per volume. This performance indicates accurate reconstruction of tissue boundaries across all segmented classes while maintaining computational efficiency, thereby outperforming conventional 3D approaches.
KW - autoencoder
KW - convolutional neural network
KW - deep learning
KW - medical image segmentation
UR - https://www.scopus.com/pages/publications/105029670812
U2 - 10.1109/CME67420.2025.11239384
DO - 10.1109/CME67420.2025.11239384
M3 - Conference contribution
AN - SCOPUS:105029670812
T3 - 2025 19th International Conference on Complex Medical Engineering, CME 2025
SP - 343
EP - 347
BT - 2025 19th International Conference on Complex Medical Engineering, CME 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 August 2025 through 3 August 2025
ER -