TY - GEN
T1 - Reading Between the Channels
T2 - 33rd ACM International Conference on Multimedia, MM 2025
AU - Yuan, Xiaoyan
AU - Wang, Wei
AU - Chen, Junxin
AU - Hu, Xiping
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
N2 - Medical time series, such as Electroencephalogram (EEG) and Electrocardiogram (ECG), are widely used for disease detection, with multiple electrodes or sensors recording simultaneously. Accurately modeling inter-channel relationships is crucial for improving detection performance. Current methods rely mainly on data-driven approaches to model channel relationships and face two challenges: (1) insufficient integration of medical prior knowledge, hindering the accurate representation of physiological correlations between channels, and (2) high temporal pattern similarity across channels, leading to feature redundancy and degraded classification performance. To address these issues, we introduce KEMed, a knowledge-augmented model for medical time series classification. The model incorporates medical textual prior knowledge by generating natural language descriptions for each channel and leveraging a Pre-trained Language Model (PLM) for semantic representation, enabling precise identification of physiological and pathological similarities and differences between channels. Specifically, KEMed optimizes channel relationships through knowledge-guided clustering and weighting mechanisms and leverages a Large Language Model (LLM) to capture spatiotemporal dependencies, thereby enhancing classification performance. Experimental results on five medical time series datasets demonstrate that KEMed consistently outperforms state-of-the-art methods, validating the effectiveness and superiority of knowledge augmentation in medical time series classification.
AB - Medical time series, such as Electroencephalogram (EEG) and Electrocardiogram (ECG), are widely used for disease detection, with multiple electrodes or sensors recording simultaneously. Accurately modeling inter-channel relationships is crucial for improving detection performance. Current methods rely mainly on data-driven approaches to model channel relationships and face two challenges: (1) insufficient integration of medical prior knowledge, hindering the accurate representation of physiological correlations between channels, and (2) high temporal pattern similarity across channels, leading to feature redundancy and degraded classification performance. To address these issues, we introduce KEMed, a knowledge-augmented model for medical time series classification. The model incorporates medical textual prior knowledge by generating natural language descriptions for each channel and leveraging a Pre-trained Language Model (PLM) for semantic representation, enabling precise identification of physiological and pathological similarities and differences between channels. Specifically, KEMed optimizes channel relationships through knowledge-guided clustering and weighting mechanisms and leverages a Large Language Model (LLM) to capture spatiotemporal dependencies, thereby enhancing classification performance. Experimental results on five medical time series datasets demonstrate that KEMed consistently outperforms state-of-the-art methods, validating the effectiveness and superiority of knowledge augmentation in medical time series classification.
KW - channel relationship
KW - medical text knowledge
KW - medical time series
KW - pretrained language model
UR - https://www.scopus.com/pages/publications/105024072632
U2 - 10.1145/3746027.3755824
DO - 10.1145/3746027.3755824
M3 - Conference contribution
AN - SCOPUS:105024072632
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
SP - 8978
EP - 8987
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 27 October 2025 through 31 October 2025
ER -