PatchCL-AE: Anomaly detection for medical images using patch-wise contrastive learning-based auto-encoder

Shuai Lu, Weihang Zhang, Jia Guo, Hanruo Liu, Huiqi Li*, Ningli Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Anomaly detection is an important yet challenging task in medical image analysis. Most anomaly detection methods are based on reconstruction, but the performance of reconstruction-based methods is limited by their over-reliance on pixel-level losses. To address this limitation, we propose a patch-wise contrastive learning-based auto-encoder for medical anomaly detection. The key contribution is a patch-wise contrastive learning loss that supervises local semantics, enforcing semantic consistency between corresponding input–output patches. The contrastive objective pulls corresponding input–output patch pairs together while pushing non-corresponding pairs apart, enabling the model to better learn local normal features and improving its discriminability on anomalous regions. Additionally, we design an anomaly score based on local semantic discrepancies, pinpointing abnormalities by comparing feature differences rather than pixel variations. Extensive experiments on three public datasets (i.e., brain MRI, retinal OCT, and chest X-ray) demonstrate state-of-the-art performance, with our method achieving over 99% AUC on retinal and brain images. Both the contrastive patch-wise supervision and the patch-discrepancy score provide targeted advancements that overcome weaknesses in existing approaches.
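The abstract describes the method only at a high level, so the following is a minimal sketch of how the patch-wise contrastive term and the feature-based anomaly score could be realized. The function names, the InfoNCE-style formulation, the patch size, and the temperature are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: the abstract does not specify architectures or
# hyper-parameters, so the patch extraction, temperature, and patch size
# below are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


def patchify(x, patch_size=16):
    """Split a batch of images (B, C, H, W) into flattened patches (B, N, C*p*p)."""
    patches = F.unfold(x, kernel_size=patch_size, stride=patch_size)  # (B, C*p*p, N)
    return patches.transpose(1, 2)  # (B, N, C*p*p)


def patchwise_contrastive_loss(enc_in, enc_out, temperature=0.1):
    """InfoNCE-style loss over corresponding input/output patch embeddings.

    enc_in, enc_out: (B, N, D) patch embeddings of the input image and its
    reconstruction. Patches at the same spatial index are positives; all
    other patches in the image serve as negatives.
    """
    B, N, D = enc_in.shape
    z_in = F.normalize(enc_in, dim=-1)
    z_out = F.normalize(enc_out, dim=-1)
    logits = torch.bmm(z_in, z_out.transpose(1, 2)) / temperature  # (B, N, N)
    targets = torch.arange(N, device=logits.device).expand(B, N)
    return F.cross_entropy(logits.reshape(B * N, N), targets.reshape(B * N))


def patch_anomaly_score(enc_in, enc_out):
    """Anomaly map from local semantic discrepancies: per-patch feature
    distance between input and reconstruction (larger = more anomalous)."""
    return 1.0 - F.cosine_similarity(enc_in, enc_out, dim=-1)  # (B, N)
```

In a training loop, corresponding patches of the input and the auto-encoder reconstruction would be embedded by some patch encoder and the contrastive term combined with the usual reconstruction loss; the weighting of the two terms is likewise an assumption not specified in the abstract.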

Original language: English
Article number: 102366
Journal: Computerized Medical Imaging and Graphics
Volume: 114
DOIs
Publication status: Published - Jun 2024

Keywords

  • Contrastive learning
  • Medical anomaly detection
  • Patch loss
