TY - GEN
T1 - PrixMatch
T2 - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
AU - Yang, Yulin
AU - Song, Hong
AU - Lin, Yucong
AU - Shao, Long
AU - Fan, Jingfan
AU - Fu, Tianyu
AU - Ai, Danni
AU - Xiao, Deqiang
AU - Yang, Jian
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Semi-supervised medical image segmentation has made significant strides, yet most existing methods are confined to single-modality data, limiting both the volume of usable data and the generalizability of the models. Multi-modal data can provide richer information, expand the dataset, and enhance model robustness. However, integrating multi-modal learning into semi-supervised medical image segmentation is challenging, primarily because label scarcity and cross-modal alignment must be handled simultaneously. In this paper, we propose PrixMatch, a multi-modal semi-supervised model with a teacher-student strategy for medical image segmentation. First, we propose a cross-modal data augmentation strategy that randomly exchanges image blocks at the same location between different modalities, guiding the student model to learn cross-modal consistency without additional network modules. Second, we design a cross-modal adaptive pseudo-label thresholding strategy that aligns the prior anatomical knowledge of different modalities and combines this modality-aligned prior knowledge with the model's learning state to filter pseudo-labels at the pixel level, flexibly alleviating the confirmation bias that arises during semi-supervised training. Experiments demonstrate that PrixMatch achieves a Dice Similarity Coefficient (DSC) of 87.2% on the BTCV (CT) and CHAOS (MR) multi-modal datasets with only a 10% labeling ratio, a nearly 5.5% improvement over the latest state-of-the-art method.
AB - Semi-supervised medical image segmentation has made significant strides, yet most existing methods are confined to single-modality data, limiting both the volume of usable data and the generalizability of the models. Multi-modal data can provide richer information, expand the dataset, and enhance model robustness. However, integrating multi-modal learning into semi-supervised medical image segmentation is challenging, primarily because label scarcity and cross-modal alignment must be handled simultaneously. In this paper, we propose PrixMatch, a multi-modal semi-supervised model with a teacher-student strategy for medical image segmentation. First, we propose a cross-modal data augmentation strategy that randomly exchanges image blocks at the same location between different modalities, guiding the student model to learn cross-modal consistency without additional network modules. Second, we design a cross-modal adaptive pseudo-label thresholding strategy that aligns the prior anatomical knowledge of different modalities and combines this modality-aligned prior knowledge with the model's learning state to filter pseudo-labels at the pixel level, flexibly alleviating the confirmation bias that arises during semi-supervised training. Experiments demonstrate that PrixMatch achieves a Dice Similarity Coefficient (DSC) of 87.2% on the BTCV (CT) and CHAOS (MR) multi-modal datasets with only a 10% labeling ratio, a nearly 5.5% improvement over the latest state-of-the-art method.
KW - data augmentation
KW - multi-modal medical image segmentation
KW - prior anatomical knowledge
KW - semi-supervised learning
UR - https://www.scopus.com/pages/publications/85217282977
U2 - 10.1109/BIBM62325.2024.10821734
DO - 10.1109/BIBM62325.2024.10821734
M3 - Conference contribution
AN - SCOPUS:85217282977
T3 - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
SP - 2766
EP - 2771
BT - Proceedings - 2024 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024
A2 - Cannataro, Mario
A2 - Zheng, Huiru
A2 - Gao, Lin
A2 - Cheng, Jianlin
A2 - de Miranda, Joao Luis
A2 - Zumpano, Ester
A2 - Hu, Xiaohua
A2 - Cho, Young-Rae
A2 - Park, Taesung
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 3 December 2024 through 6 December 2024
ER -