Improving image segmentation with contextual and structural similarity

Xiaoyang Chen, Qin Liu, Hannah H. Deng, Tianshu Kuang, Henry Hung Ying Lin, Deqiang Xiao, Jaime Gateno, James J. Xia, Pew Thian Yap*

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Deep learning models for medical image segmentation are usually trained with voxel-wise losses, e.g., cross-entropy loss, focusing on unary supervision without considering inter-voxel relationships. This oversight potentially leads to semantically inconsistent predictions. Here, we propose a contextual similarity loss (CSL) and a structural similarity loss (SSL) to explicitly and efficiently incorporate inter-voxel relationships for improved performance. The CSL promotes consistency in predicted object categories for each image sub-region compared to ground truth. The SSL enforces compatibility between the predictions of voxel pairs by computing pair-wise distances between them, ensuring that voxels of the same class are close together whereas those from different classes are separated by a wide margin in the distribution space. The effectiveness of the CSL and SSL is evaluated using a clinical cone-beam computed tomography (CBCT) dataset of patients with various craniomaxillofacial (CMF) deformities and a public pancreas dataset. Experimental results show that the CSL and SSL outperform state-of-the-art regional loss functions in preserving segmentation semantics.
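To illustrate the idea behind the structural similarity loss described above, the following is a minimal NumPy sketch, not the paper's exact formulation: it computes pair-wise distances between predicted class distributions of sampled voxels, pulling same-class pairs together and pushing different-class pairs apart by at least a margin. The function name, the Euclidean distance, and the `margin` parameter are illustrative assumptions.

```python
import numpy as np

def structural_similarity_loss(probs, labels, margin=1.0):
    """Illustrative pair-wise structural loss (sketch, not the paper's exact form).

    probs  : (N, C) array of predicted class distributions for N sampled voxels
    labels : (N,) array of ground-truth class indices
    margin : minimum desired separation for different-class pairs (assumed)
    """
    n = probs.shape[0]
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            # pair-wise distance between the two predicted distributions
            d = np.linalg.norm(probs[i] - probs[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # pull same-class pairs close
            else:
                loss += max(0.0, margin - d) ** 2  # push different-class pairs apart
            pairs += 1
    return loss / pairs
```

For example, confident one-hot predictions that agree with the labels incur zero loss, while identical distributions assigned to voxels of different classes are penalized because their separation falls short of the margin.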

Original language: English
Article number: 110489
Journal: Pattern Recognition
Volume: 152
DOI
Publication status: Published - Aug 2024
Externally published: Yes
