Abstract
Deep learning approaches are widely used in medical image analysis and have shown impressive results on many analytical tasks. However, textual information related to medical images is often underutilized in existing methods, despite its rich semantic value and its potential to provide multigranular guidance for medical image analysis. Meanwhile, many medical images, such as magnetic resonance (MR) images, are 3D volumes consisting of multiple slices that contain complex and redundant information, making them especially hard to represent. In this paper, we propose a multimodal fusion framework for 3D medical image classification, which utilizes the medical text paired with a 3D medical image to guide the generation and aggregation of image features. Results show that our method significantly outperforms unimodal and multimodal baseline methods. Ablation studies validate the effectiveness of each component, and visualization results also reveal our model's strong ability to capture both fine-grained and coarse-grained information.
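The abstract does not detail how text guides the aggregation of slice features; a minimal sketch, assuming a cross-attention-style weighting in which a paired text embedding scores each slice of the 3D volume (the function and variable names here are illustrative, not the paper's), might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_guided_aggregation(slice_feats, text_feat):
    """Fuse per-slice features of a 3D volume into a single vector,
    weighting slices by their similarity to a paired text embedding.

    slice_feats: (num_slices, dim) array of per-slice image features
    text_feat:   (dim,) text embedding for the paired report
    returns:     (dim,) text-guided volume representation
    """
    dim = slice_feats.shape[1]
    # Scaled dot-product scores between the text query and each slice.
    scores = slice_feats @ text_feat / np.sqrt(dim)
    weights = softmax(scores)        # attention weights over slices
    return weights @ slice_feats     # weighted sum of slice features
```

Such a scheme lets the text emphasize informative slices while down-weighting redundant ones, which matches the paper's motivation for handling multi-slice 3D inputs.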
Original language | English |
---|---|
Title of host publication | Proceedings - 2024 10th International Conference on Big Data Computing and Communications, BIGCOM 2024 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 42-49 |
Number of pages | 8 |
Edition | 2024 |
ISBN (Electronic) | 9798331509538 |
Publication status | Published - 2024 |
Externally published | Yes |
Event | 10th International Conference on Big Data Computing and Communications, BIGCOM 2024 - Dalian, China. Duration: 9 Aug 2024 → 11 Aug 2024 |
Conference
Conference | 10th International Conference on Big Data Computing and Communications, BIGCOM 2024 |
---|---|
Country/Territory | China |
City | Dalian |
Period | 9/08/24 → 11/08/24 |
Keywords
- 3D medical image classification
- multi-modal feature interaction and fusion
- vision-language modeling