TY - JOUR
T1 - Anomaly Detection for Medical Images Using Self-Supervised and Translation-Consistent Features
AU - Zhao, He
AU - Li, Yuexiang
AU - He, Nanjun
AU - Ma, Kai
AU - Fang, Leyuan
AU - Li, Huiqi
AU - Zheng, Yefeng
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2021/12/1
Y1 - 2021/12/1
N2 - As labeled anomalous medical images are usually difficult to acquire, especially for rare diseases, deep-learning-based methods, which rely heavily on large amounts of labeled data, cannot yield satisfactory performance. Compared to anomalous data, normal images, which require no lesion annotation, are much easier to collect. In this paper, we propose an anomaly detection framework, namely SALAD, extracting Self-supervised and trAnsLation-consistent features for Anomaly Detection. The proposed SALAD is a reconstruction-based method, which learns the manifold of normal data through an encode-and-reconstruct translation between the image and latent spaces. In particular, two constraints (i.e., a structure similarity loss and a center constraint loss) are proposed to regulate the cross-space (i.e., image and feature) translation, forcing the model to learn translation-consistent and representative features from the normal data. Furthermore, a self-supervised learning module is integrated into our framework to further boost the anomaly detection accuracy by deeply exploiting useful information from the raw normal data. An anomaly score, as a measure to separate anomalous data from healthy data, is constructed based on the learned self-supervised and translation-consistent features. Extensive experiments are conducted on optical coherence tomography (OCT) and chest X-ray datasets. The experimental results demonstrate the effectiveness of our approach.
AB - As labeled anomalous medical images are usually difficult to acquire, especially for rare diseases, deep-learning-based methods, which rely heavily on large amounts of labeled data, cannot yield satisfactory performance. Compared to anomalous data, normal images, which require no lesion annotation, are much easier to collect. In this paper, we propose an anomaly detection framework, namely SALAD, extracting Self-supervised and trAnsLation-consistent features for Anomaly Detection. The proposed SALAD is a reconstruction-based method, which learns the manifold of normal data through an encode-and-reconstruct translation between the image and latent spaces. In particular, two constraints (i.e., a structure similarity loss and a center constraint loss) are proposed to regulate the cross-space (i.e., image and feature) translation, forcing the model to learn translation-consistent and representative features from the normal data. Furthermore, a self-supervised learning module is integrated into our framework to further boost the anomaly detection accuracy by deeply exploiting useful information from the raw normal data. An anomaly score, as a measure to separate anomalous data from healthy data, is constructed based on the learned self-supervised and translation-consistent features. Extensive experiments are conducted on optical coherence tomography (OCT) and chest X-ray datasets. The experimental results demonstrate the effectiveness of our approach.
KW - Medical image analysis
KW - anomaly detection
KW - feature space constraint
KW - generative adversarial networks
UR - http://www.scopus.com/inward/record.url?scp=85112218664&partnerID=8YFLogxK
U2 - 10.1109/TMI.2021.3093883
DO - 10.1109/TMI.2021.3093883
M3 - Article
C2 - 34197318
AN - SCOPUS:85112218664
SN - 0278-0062
VL - 40
SP - 3641
EP - 3651
JO - IEEE Transactions on Medical Imaging
JF - IEEE Transactions on Medical Imaging
IS - 12
ER -