TY - JOUR
T1 - Emergency Scheduling of Aerial Vehicles via Graph Neural Neighborhood Search
AU - Guo, Tong
AU - Mei, Yi
AU - Du, Wenbo
AU - Lv, Yisheng
AU - Li, Yumeng
AU - Song, Tao
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The thriving advances in autonomous vehicles and aviation have enabled the efficient implementation of aerial last-mile delivery services to meet the pressing demand for urgent relief supply distribution. Variable Neighborhood Search (VNS) is a promising technique for aerial emergency scheduling. However, existing VNS methods typically explore all considered neighborhoods exhaustively in a fixed, predefined order, leading to an inefficient search process and slow convergence. To address this issue, this paper proposes a novel graph neural neighborhood search algorithm that includes an online reinforcement learning (RL) agent guiding the search by selecting the most appropriate low-level local search operators based on the search state. We develop a dual-graph neural representation learning method to extract comprehensive and informative feature representations from the search state. In addition, we propose a reward-shaping policy learning method to address the decaying-reward issue that arises as the search progresses. Extensive experiments across various benchmark instances demonstrate that the proposed algorithm significantly outperforms state-of-the-art approaches. Further investigations validate the effectiveness of the newly designed knowledge guidance scheme and the learned feature representations.
AB - The thriving advances in autonomous vehicles and aviation have enabled the efficient implementation of aerial last-mile delivery services to meet the pressing demand for urgent relief supply distribution. Variable Neighborhood Search (VNS) is a promising technique for aerial emergency scheduling. However, existing VNS methods typically explore all considered neighborhoods exhaustively in a fixed, predefined order, leading to an inefficient search process and slow convergence. To address this issue, this paper proposes a novel graph neural neighborhood search algorithm that includes an online reinforcement learning (RL) agent guiding the search by selecting the most appropriate low-level local search operators based on the search state. We develop a dual-graph neural representation learning method to extract comprehensive and informative feature representations from the search state. In addition, we propose a reward-shaping policy learning method to address the decaying-reward issue that arises as the search progresses. Extensive experiments across various benchmark instances demonstrate that the proposed algorithm significantly outperforms state-of-the-art approaches. Further investigations validate the effectiveness of the newly designed knowledge guidance scheme and the learned feature representations.
KW - adaptive operator selection
KW - combinatorial optimization
KW - reinforcement learning
KW - variable neighborhood search
UR - http://www.scopus.com/inward/record.url?scp=85215413731&partnerID=8YFLogxK
U2 - 10.1109/TAI.2025.3528381
DO - 10.1109/TAI.2025.3528381
M3 - Article
AN - SCOPUS:85215413731
SN - 2691-4581
JO - IEEE Transactions on Artificial Intelligence
JF - IEEE Transactions on Artificial Intelligence
ER -