TY - JOUR
T1 - MSTAN
T2 - A multi-scale temporal attention network for stock prediction
AU - Chen, Yunzhu
AU - Ye, Neng
AU - Zhang, Wenyu
AU - Song, Shenghui
AU - Li, Xiangming
N1 - Publisher Copyright:
© 2025
PY - 2026/4/25
Y1 - 2026/4/25
N2 - Stock price prediction remains a challenging task due to the inherent non-stationarity, multi-scale temporal dependencies, and complex cross-asset correlations in financial markets. In this paper, we propose MSTAN, a novel Multi-Scale Temporal Attention Network designed to model these spatiotemporal dependencies explicitly. MSTAN constructs multi-scale representations through a two-dimensional periodic reconstruction strategy and employs a Temporal Hybrid Attention mechanism to jointly learn local fluctuations and global trends. Furthermore, MSTAN uses an adaptive module with channel-wise attention to dynamically capture inter-stock dependencies and integrates multi-scale features through a progressive coarse-to-fine fusion strategy. Extensive experiments across diverse datasets, including Chinese A-shares and the US market, demonstrate that MSTAN consistently outperforms state-of-the-art baselines, achieving MAE reductions of up to 28.6%. Portfolio backtesting further validates its practical utility, showing superior risk-adjusted returns.
AB - Stock price prediction remains a challenging task due to the inherent non-stationarity, multi-scale temporal dependencies, and complex cross-asset correlations in financial markets. In this paper, we propose MSTAN, a novel Multi-Scale Temporal Attention Network designed to model these spatiotemporal dependencies explicitly. MSTAN constructs multi-scale representations through a two-dimensional periodic reconstruction strategy and employs a Temporal Hybrid Attention mechanism to jointly learn local fluctuations and global trends. Furthermore, MSTAN uses an adaptive module with channel-wise attention to dynamically capture inter-stock dependencies and integrates multi-scale features through a progressive coarse-to-fine fusion strategy. Extensive experiments across diverse datasets, including Chinese A-shares and the US market, demonstrate that MSTAN consistently outperforms state-of-the-art baselines, achieving MAE reductions of up to 28.6%. Portfolio backtesting further validates its practical utility, showing superior risk-adjusted returns.
KW - Attention mechanism
KW - Inter-stock dependencies
KW - Multi-scale fusion
KW - Multi-scale temporal modeling
KW - Stock price prediction
UR - https://www.scopus.com/pages/publications/105024856825
U2 - 10.1016/j.ins.2025.122992
DO - 10.1016/j.ins.2025.122992
M3 - Article
AN - SCOPUS:105024856825
SN - 0020-0255
VL - 733
JO - Information Sciences
JF - Information Sciences
M1 - 122992
ER -