TY - CONF
T1 - Mutual Contrastive Low-rank Learning to Disentangle Whole Slide Image Representations for Glioma Grading
AU - Zhang, Lipei
AU - Wei, Yiran
AU - Fu, Ying
AU - Price, Stephen
AU - Schönlieb, Carola-Bibiane
AU - Li, Chao
N1 - Publisher Copyright:
© 2022. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms.
PY - 2022
Y1 - 2022
N2 - Whole slide images (WSIs) provide valuable phenotypic information for the histological assessment and malignancy grading of tumors. WSI-based grading promises rapid diagnostic support and facilitates digital health. Currently, the most commonly used WSIs are derived from formalin-fixed paraffin-embedded (FFPE) and frozen sections. The majority of automatic tumor grading models are developed on FFPE sections, which can be affected by artifacts introduced during tissue processing. Frozen sections suffer from problems such as low quality, which might likewise hinder training within a single modality. To overcome these problems of single-modality training and achieve better multi-modal, discriminative representation disentanglement for brain tumors, we propose a mutual contrastive low-rank learning (MCL) scheme that integrates FFPE and frozen sections for glioma grading. We first design a mutual learning scheme to jointly optimize model training on FFPE and frozen sections. Within this scheme, we design a normalized modality contrastive loss (NMC-loss), which promotes disentangling the complementary multi-modality representations of FFPE and frozen sections from the same patient. To reduce intra-class variance and increase the inter-class margin at intra- and inter-patient levels, we introduce a low-rank (LR) loss. Our experiments show that the proposed scheme achieves better performance than models trained on a single modality or on mixed modalities, without reducing inference efficiency, and even improves feature extraction in classical attention-based multiple instance learning (MIL) methods. The combination of the NMC-loss and the low-rank loss outperforms other typical contrastive loss functions. The source code is available at https://github.com/uceclz0/MCL_glioma_grading.
AB - Whole slide images (WSIs) provide valuable phenotypic information for the histological assessment and malignancy grading of tumors. WSI-based grading promises rapid diagnostic support and facilitates digital health. Currently, the most commonly used WSIs are derived from formalin-fixed paraffin-embedded (FFPE) and frozen sections. The majority of automatic tumor grading models are developed on FFPE sections, which can be affected by artifacts introduced during tissue processing. Frozen sections suffer from problems such as low quality, which might likewise hinder training within a single modality. To overcome these problems of single-modality training and achieve better multi-modal, discriminative representation disentanglement for brain tumors, we propose a mutual contrastive low-rank learning (MCL) scheme that integrates FFPE and frozen sections for glioma grading. We first design a mutual learning scheme to jointly optimize model training on FFPE and frozen sections. Within this scheme, we design a normalized modality contrastive loss (NMC-loss), which promotes disentangling the complementary multi-modality representations of FFPE and frozen sections from the same patient. To reduce intra-class variance and increase the inter-class margin at intra- and inter-patient levels, we introduce a low-rank (LR) loss. Our experiments show that the proposed scheme achieves better performance than models trained on a single modality or on mixed modalities, without reducing inference efficiency, and even improves feature extraction in classical attention-based multiple instance learning (MIL) methods. The combination of the NMC-loss and the low-rank loss outperforms other typical contrastive loss functions. The source code is available at https://github.com/uceclz0/MCL_glioma_grading.
UR - http://www.scopus.com/inward/record.url?scp=85174702721&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85174702721
T2 - 33rd British Machine Vision Conference Proceedings, BMVC 2022
Y2 - 21 November 2022 through 24 November 2022
ER -