CAF-AHGCN: context-aware attention fusion adaptive hypergraph convolutional network for human-interpretable prediction of gigapixel whole-slide image

Meiyan Liang*, Xing Jiang, Jie Cao*, Bo Li, Lin Wang*, Qinghui Chen, Cunlin Zhang, Yuejin Zhao

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Predicting the labels of gigapixel whole-slide images (WSIs) and localizing regions of interest (ROIs) with high precision are of great interest in computational pathology. Existing methods are mainly based on the multi-instance learning (MIL) approach and its variants. However, such algorithms treat the instances of a slide as independent samples and therefore cannot effectively describe the relationships between instances in a WSI. As a result, only a subset of high-score instances can be located, which is inadequate for clinical scenarios. We propose the context-aware attention fusion adaptive hypergraph convolutional network (CAF-AHGCN) to adaptively establish the local and global topology of cropped image patches spatially arranged in the slide. The framework combines hypergraph embedding representation with attention-based MIL pooling aggregation in a hierarchical feature-fusion manner, which fully preserves the high-order spatial structural correlations of the patches when making a slide-level prediction. An adaptive hypergraph convolutional network is designed to dynamically adjust the correlation strength between hypergraph nodes as the model goes deeper. Poly-1 loss and residual connections are also applied to prevent over-smoothing and improve the generalization ability of the deep CAF-AHGCN model. We verified the superiority of CAF-AHGCN on two datasets, CAMELYON16 and TCGA-NSCLC. On both datasets, our model outperforms other state-of-the-art algorithms in ACC, AUC, and F1 score. The heatmaps produced by CAF-AHGCN are highly consistent with the pixel-wise annotations of the WSIs. These results show that CAF-AHGCN not only achieves high-accuracy label prediction for WSIs but also provides patch-wise, human-interpretable ROI localization heatmaps. The outstanding performance of the CAF-AHGCN framework offers a new perspective for future clinical applications of computer-aided diagnosis and intelligent systems.
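To make the abstract's building blocks concrete, the sketch below illustrates hypergraph convolution with a residual connection, attention-based MIL pooling over patch embeddings, and the Poly-1 loss. It is a minimal PyTorch sketch, not the authors' released implementation: the incidence matrix, feature dimensions, module names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the components named in the abstract (assumed shapes/names).
import torch
import torch.nn as nn
import torch.nn.functional as F


def hypergraph_propagation(H: torch.Tensor) -> torch.Tensor:
    """Normalized propagation matrix D_v^{-1/2} H D_e^{-1} H^T D_v^{-1/2}
    built from a binary incidence matrix H of shape (nodes, hyperedges)."""
    Dv = H.sum(dim=1).clamp(min=1)            # vertex degrees
    De = H.sum(dim=0).clamp(min=1)            # hyperedge degrees
    Dv_inv_sqrt = torch.diag(Dv.pow(-0.5))
    De_inv = torch.diag(De.pow(-1.0))
    return Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt


class HGConvResidual(nn.Module):
    """One hypergraph convolution layer, X' = X + relu(G X Theta); the
    residual connection helps curb over-smoothing in deeper stacks."""
    def __init__(self, dim: int):
        super().__init__()
        self.theta = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, G: torch.Tensor) -> torch.Tensor:
        return x + F.relu(self.theta(G @ x))


class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling (Ilse et al., 2018): score each patch
    embedding, then aggregate into one slide-level embedding."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)
        self.w = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor):
        a = torch.softmax(self.w(torch.tanh(self.V(x))), dim=0)  # (N, 1)
        return (a * x).sum(dim=0), a.squeeze(-1)  # slide embedding, scores


def poly1_cross_entropy(logits, target, eps: float = 1.0):
    """Poly-1 loss (Leng et al., 2022): cross-entropy plus eps * (1 - p_t)."""
    ce = F.cross_entropy(logits, target)
    pt = F.softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).mean()
    return ce + eps * (1.0 - pt)


if __name__ == "__main__":
    # Toy forward pass: 32 patch embeddings, 8 hyperedges, binary label.
    N, E, D = 32, 8, 64
    x = torch.randn(N, D)                     # patch features
    H = (torch.rand(N, E) > 0.7).float()      # assumed incidence matrix
    G = hypergraph_propagation(H)
    x = HGConvResidual(D)(x, G)
    slide_emb, scores = AttentionMILPooling(D)(x)
    logits = nn.Linear(D, 2)(slide_emb).unsqueeze(0)
    loss = poly1_cross_entropy(logits, torch.tensor([1]))
    print(loss.item(), scores.shape)
```

The per-patch attention scores returned by the pooling module are the kind of patch-wise weights that can be rendered as an ROI localization heatmap over the slide.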

Original language: English
Pages (from-to): 8747-8765
Number of pages: 19
Journal: Visual Computer
Volume: 40
Issue number: 12
DOI: https://doi.org/10.1007/s00371-024-03269-7
Publication status: Published - Dec 2024

Keywords

  • Adaptive hypergraph convolution network
  • Computational pathology
  • Context-aware information
  • Interpretability
  • Whole-slide image

Cite this

Liang, M., Jiang, X., Cao, J., Li, B., Wang, L., Chen, Q., Zhang, C., & Zhao, Y. (2024). CAF-AHGCN: context-aware attention fusion adaptive hypergraph convolutional network for human-interpretable prediction of gigapixel whole-slide image. Visual Computer, 40(12), 8747-8765. https://doi.org/10.1007/s00371-024-03269-7