TY - GEN
T1 - Deep Reinforcement Learning for Multi-Period Facility Location
T2 - 32nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM SIGSPATIAL 2024
AU - Miao, Changhao
AU - Zhang, Yuntian
AU - Wu, Tongyu
AU - Deng, Fang
AU - Chen, Chen
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/11/22
Y1 - 2024/11/22
N2 - Facility location is a crucial aspect of spatial optimization with broad applications in urban planning. Specifically, the multi-period problem involves spatial and temporal information, making it challenging to solve. Existing research mainly focuses on heuristic methods, which depend on complex hand-crafted techniques. In this paper, we propose a novel method based on Deep Reinforcement Learning (DRL) to solve the pk-median Dynamic Location Problem (DLP-pk). Unlike classical heuristic methods, our method avoids intricate designs and considers the temporal impacts of decisions. We are the first to apply DRL to the multi-period facility location problem. Our method adopts the encoder-decoder architecture and utilizes a specialized structure to capture the temporal features across different periods. On the one hand, we introduce Gated Recurrent Units (GRUs) to encode temporal information, including dynamic coverage and dynamic costs. On the other hand, we design an attention-based decoder that allows the model to capture long-term dependencies in decision-making. Experimental results on instances from small to large sizes demonstrate that our method can quickly provide high-quality solutions without relying heavily on expert knowledge, offering opportunities for efficiently solving multi-period facility location problems. Additionally, our method is up to two orders of magnitude faster than the exact solver Gurobi and demonstrates great generalization abilities.
AB - Facility location is a crucial aspect of spatial optimization with broad applications in urban planning. Specifically, the multi-period problem involves spatial and temporal information, making it challenging to solve. Existing research mainly focuses on heuristic methods, which depend on complex hand-crafted techniques. In this paper, we propose a novel method based on Deep Reinforcement Learning (DRL) to solve the pk-median Dynamic Location Problem (DLP-pk). Unlike classical heuristic methods, our method avoids intricate designs and considers the temporal impacts of decisions. We are the first to apply DRL to the multi-period facility location problem. Our method adopts the encoder-decoder architecture and utilizes a specialized structure to capture the temporal features across different periods. On the one hand, we introduce Gated Recurrent Units (GRUs) to encode temporal information, including dynamic coverage and dynamic costs. On the other hand, we design an attention-based decoder that allows the model to capture long-term dependencies in decision-making. Experimental results on instances from small to large sizes demonstrate that our method can quickly provide high-quality solutions without relying heavily on expert knowledge, offering opportunities for efficiently solving multi-period facility location problems. Additionally, our method is up to two orders of magnitude faster than the exact solver Gurobi and demonstrates great generalization abilities.
KW - Deep Reinforcement Learning
KW - Facility Location
KW - Multi-Period
KW - Spatial Optimization
UR - http://www.scopus.com/inward/record.url?scp=85215116070&partnerID=8YFLogxK
U2 - 10.1145/3678717.3691249
DO - 10.1145/3678717.3691249
M3 - Conference contribution
AN - SCOPUS:85215116070
T3 - 32nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM SIGSPATIAL 2024
SP - 173
EP - 183
BT - 32nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM SIGSPATIAL 2024
A2 - Nascimento, Mario A.
A2 - Xiong, Li
A2 - Züfle, Andreas
A2 - Chiang, Yao-Yi
A2 - Eldawy, Ahmed
A2 - Kröger, Peer
PB - Association for Computing Machinery, Inc
Y2 - 29 October 2024 through 1 November 2024
ER -