CAF-AHGCN: context-aware attention fusion adaptive hypergraph convolutional network for human-interpretable prediction of gigapixel whole-slide image

Meiyan Liang*, Xing Jiang, Jie Cao*, Bo Li, Lin Wang*, Qinghui Chen, Cunlin Zhang, Yuejin Zhao

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Predicting labels of gigapixel whole-slide images (WSIs) and localizing regions of interest (ROIs) with high precision are of great interest in computational pathology. Existing methods are mainly based on the multi-instance learning (MIL) approach and its variants. However, such algorithms largely treat the instances of a slide as independent samples and therefore cannot effectively describe the relationships between instances in a WSI. As a result, only a subset of high-score instances can be located, which is inadequate for clinical scenarios. A context-aware attention fusion adaptive hypergraph convolutional network (CAF-AHGCN) is proposed to adaptively establish the local and global topology of cropped image patches spatially arranged in the slides. The framework combines hypergraph embedding representation with attention-based MIL pooling aggregation in a hierarchical feature-fusion manner, fully preserving the high-order spatial structural correlations among patches when making a slide-level prediction. An adaptive hypergraph convolutional network is designed to dynamically adjust the correlation strength between hypergraph nodes as the model goes deeper. Poly-1 loss and residual connections are also applied to prevent over-smoothing and improve the generalization ability of the deep CAF-AHGCN model. We verified the superiority of CAF-AHGCN on two datasets, CAMELYON16 and TCGA-NSCLC. The results show that our model outperforms other state-of-the-art algorithms in ACC, AUC, and F1 score on both datasets, and the heatmaps produced by CAF-AHGCN are highly consistent with the pixel-wise annotated labels of the WSIs. Thus, CAF-AHGCN not only achieves high-accuracy label prediction for WSIs but also provides patch-wise, human-interpretable ROI localization heatmaps. The outstanding performance of the CAF-AHGCN framework provides a new perspective for future clinical applications of computer-aided diagnosis and intelligent systems.
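
The abstract names three reusable building blocks: hypergraph convolution over patch nodes with residual connections, attention-based MIL pooling, and the Poly-1 loss. The sketch below is a minimal PyTorch illustration of those generic components, not the authors' implementation; all class names, layer sizes, and the hyperedge-weight handling are assumptions, and the paper's adaptive hyperedge weighting and hierarchical context-aware fusion are not reproduced here.

```python
# Minimal, self-contained PyTorch sketch of the generic components named in
# the abstract. Illustrative only: shapes, names, and defaults are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HypergraphConv(nn.Module):
    """One HGNN-style hypergraph convolution with a residual skip,
    which helps counteract over-smoothing in deeper stacks."""

    def __init__(self, dim: int):
        super().__init__()
        self.theta = nn.Linear(dim, dim)  # learnable node transform

    def forward(self, x, H, w):
        # x: (N, dim) patch features; H: (N, E) incidence; w: (E,) edge weights
        Dv = (H * w).sum(dim=1).clamp(min=1e-6).pow(-0.5)  # D_v^{-1/2}
        De = H.sum(dim=0).clamp(min=1e-6).pow(-1.0)        # D_e^{-1}
        # X' = D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X
        msg = Dv[:, None] * ((H * (w * De)) @ (H.t() @ (Dv[:, None] * x)))
        return F.relu(x + self.theta(msg))                 # residual connection


class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: a weighted sum of instance embeddings
    whose weights double as a patch-wise interpretability heatmap."""

    def __init__(self, dim: int, attn_dim: int = 128):
        super().__init__()
        self.V = nn.Linear(dim, attn_dim)
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, x):
        a = torch.softmax(self.w(torch.tanh(self.V(x))), dim=0)  # (N, 1)
        return (a * x).sum(dim=0), a  # slide embedding, attention weights


def poly1_ce_loss(logits, target, eps: float = 1.0):
    """Poly-1 loss: cross-entropy plus eps * (1 - p_t) on the true class."""
    ce = F.cross_entropy(logits, target)
    pt = logits.softmax(dim=-1).gather(-1, target.unsqueeze(-1)).mean()
    return ce + eps * (1.0 - pt)
```

In such a pipeline, patch embeddings and a spatial/feature-neighborhood incidence matrix would pass through a stack of `HypergraphConv` layers, `AttentionMILPooling` would aggregate the nodes into a slide embedding for a linear classifier trained with `poly1_ce_loss`, and the returned attention weights would be rendered as the ROI heatmap.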

Original language: English
Pages (from-to): 8747-8765
Number of pages: 19
Journal: Visual Computer
Volume: 40
Issue number: 12
DOI: 10.1007/s00371-024-03269-7
Publication status: Published - Dec 2024

Cite this

Liang, M., Jiang, X., Cao, J., Li, B., Wang, L., Chen, Q., Zhang, C., & Zhao, Y. (2024). CAF-AHGCN: context-aware attention fusion adaptive hypergraph convolutional network for human-interpretable prediction of gigapixel whole-slide image. Visual Computer, 40(12), 8747-8765. https://doi.org/10.1007/s00371-024-03269-7