Abstract
Contrastive learning, as an unsupervised technique, is widely employed in image segmentation to improve performance even when only small labeled datasets are available. However, constructing positive and negative pairs for medical image segmentation is challenging, because similar tissues and organs appear across different slices of a dataset. To tackle this issue, we propose a novel contrastive learning strategy that leverages the relative position differences between image slices. Additionally, we combine global and local features to address this problem more effectively. To improve segmentation accuracy and reduce isolated mis-segmented regions, we apply a two-dimensional fully connected conditional random field to iteratively refine the segmentation results. With only 10 labeled samples, the proposed method achieves average Dice scores of 0.876 and 0.899 on the heart segmentation tasks of the public and private datasets, respectively, surpassing the PCL method's 0.801 and 0.852. Experimental results on both public and private MRI datasets demonstrate that the proposed method yields significant improvements in medical segmentation with limited annotated samples, outperforming existing semi-supervised and self-supervised techniques.
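The abstract describes the method only at a high level; as a rough, hypothetical sketch of how relative slice positions could drive positive/negative pair construction, the PyTorch snippet below treats slices at nearby relative positions within a volume as positives in an InfoNCE-style loss. The function name `position_aware_contrastive_loss`, the `pos_thresh` rule, and the hyperparameters are illustrative assumptions, not the paper's actual formulation (which additionally combines global and local features and refines predictions with a 2D fully connected CRF).

```python
import torch
import torch.nn.functional as F

def position_aware_contrastive_loss(features, positions, tau=0.1, pos_thresh=0.1):
    """InfoNCE-style loss where positives are slices at nearby relative positions.

    features:   (N, D) slice-level embeddings from the encoder.
    positions:  (N,) relative slice positions, e.g. slice_index / num_slices, in [0, 1].
    pos_thresh: slices whose relative positions differ by less than this are positives
                (an assumed pairing rule for illustration only).
    """
    z = F.normalize(features, dim=1)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    # Pairwise similarity logits; self-similarity is pushed out of the softmax.
    logits = (z @ z.t() / tau).masked_fill(eye, -1e9)
    log_prob = F.log_softmax(logits, dim=1)

    # Positive mask from relative slice-position differences.
    diff = (positions.unsqueeze(0) - positions.unsqueeze(1)).abs()
    pos_mask = (diff < pos_thresh) & ~eye

    # Average log-likelihood of positives, over anchors that have at least one positive.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss = -(log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()

# Example usage on random data:
feats = torch.randn(8, 128)        # embeddings of 8 slices from one volume
pos = torch.linspace(0, 1, 8)      # their relative positions within the volume
print(position_aware_contrastive_loss(feats, pos))
```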
| Original language | English |
| --- | --- |
| Article number | e22992 |
| Journal | International Journal of Imaging Systems and Technology |
| Volume | 34 |
| Issue number | 2 |
| Publication status | Published - Mar 2024 |
Keywords
- contrastive learning
- medical image segmentation
- relative position