TY - JOUR
T1 - OrbitNet—A fully automated orbit multi-organ segmentation model based on transformer in CT images
AU - Li, Wentao
AU - Song, Hong
AU - Li, Zongyu
AU - Lin, Yucong
AU - Shi, Jieliang
AU - Yang, Jian
AU - Wu, Wencan
N1 - Publisher Copyright:
© 2023 Elsevier Ltd
PY - 2023/3
Y1 - 2023/3
N2 - The delineation of orbital organs is a vital step in the diagnosis of orbital diseases and in preoperative planning. However, accurate multi-organ segmentation remains a clinical challenge with two limitations. First, the contrast of soft tissue is relatively low, so the boundaries of organs are often unclear. Second, the optic nerve and the rectus muscle are difficult to distinguish because they are spatially adjacent and have similar geometry. To address these challenges, we propose the OrbitNet model to automatically segment orbital organs in CT images. Specifically, we present a global feature extraction module based on the transformer architecture, called the FocusTrans encoder, which enhances the ability to extract boundary features. To make the network focus on extracting edge features of the optic nerve and rectus muscle, the SA block replaces the convolution block in the decoding stage. In addition, we use the structural similarity measure (SSIM) loss as part of a hybrid loss function to better learn the edge differences between organs. OrbitNet has been trained and tested on a CT dataset collected by the Eye Hospital of Wenzhou Medical University. The experimental results show that our proposed model achieves superior results: the average Dice Similarity Coefficient (DSC) is 83.9%, the average 95% Hausdorff Distance (HD95) is 1.62 mm, and the average Symmetric Surface Distance (ASSD) is 0.47 mm. Our model also performs well on the MICCAI 2015 challenge dataset.
AB - The delineation of orbital organs is a vital step in the diagnosis of orbital diseases and in preoperative planning. However, accurate multi-organ segmentation remains a clinical challenge with two limitations. First, the contrast of soft tissue is relatively low, so the boundaries of organs are often unclear. Second, the optic nerve and the rectus muscle are difficult to distinguish because they are spatially adjacent and have similar geometry. To address these challenges, we propose the OrbitNet model to automatically segment orbital organs in CT images. Specifically, we present a global feature extraction module based on the transformer architecture, called the FocusTrans encoder, which enhances the ability to extract boundary features. To make the network focus on extracting edge features of the optic nerve and rectus muscle, the SA block replaces the convolution block in the decoding stage. In addition, we use the structural similarity measure (SSIM) loss as part of a hybrid loss function to better learn the edge differences between organs. OrbitNet has been trained and tested on a CT dataset collected by the Eye Hospital of Wenzhou Medical University. The experimental results show that our proposed model achieves superior results: the average Dice Similarity Coefficient (DSC) is 83.9%, the average 95% Hausdorff Distance (HD95) is 1.62 mm, and the average Symmetric Surface Distance (ASSD) is 0.47 mm. Our model also performs well on the MICCAI 2015 challenge dataset.
KW - CT images
KW - Hybrid loss function
KW - Orbital organ segmentation
KW - SSIM
KW - Transformer architecture
UR - http://www.scopus.com/inward/record.url?scp=85148329526&partnerID=8YFLogxK
U2 - 10.1016/j.compbiomed.2023.106628
DO - 10.1016/j.compbiomed.2023.106628
M3 - Article
C2 - 36809695
AN - SCOPUS:85148329526
SN - 0010-4825
VL - 155
JO - Computers in Biology and Medicine
JF - Computers in Biology and Medicine
M1 - 106628
ER -