Abstract
Increasing vessel sizes and automation levels have shifted the productivity bottleneck of automated container terminals from the terminal side to the yard side. Operating an automated container terminal (ACT) yard with a large number of automated guided vehicles (AGVs) is challenging due to the complexity and dynamics of the system, which severely affect operational efficiency and energy-use efficiency. In this paper, a hybrid multi-AGV scheduling algorithm is proposed to minimise the energy consumption and total makespan of AGVs in an ACT yard. The framework first models the AGV scheduling process as a Markov decision process (MDP). A novel scheduling algorithm, MDAS, is then proposed based on the multi-agent deep deterministic policy gradient (MADDPG) to support online real-time scheduling decisions. Finally, simulation experiments comparing the proposed method with benchmark methods show that it effectively enhances the operational efficiency and energy-use performance of AGVs in ACT yards of various scales.
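The MDP formulation the abstract refers to can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the class name `YardMDP`, the state (pending task times plus per-AGV busy clocks), the energy proxy, and the reward weights are not taken from the paper, whose actual state, action, and reward definitions are not given in this record.

```python
class YardMDP:
    """Toy AGV-scheduling MDP (illustrative, not the paper's model).

    State:  remaining transport-task durations and each AGV's busy-until clock.
    Action: assign one pending task to one AGV.
    Reward: negative weighted sum of energy use and incremental makespan,
            mirroring the abstract's dual objective (energy + total makespan).
    """

    def __init__(self, task_times, num_agvs, w_energy=0.5, w_time=0.5):
        self.tasks = list(task_times)   # travel time of each pending task
        self.clocks = [0.0] * num_agvs  # time at which each AGV becomes free
        self.w_energy, self.w_time = w_energy, w_time

    def state(self):
        return (tuple(self.tasks), tuple(self.clocks))

    def step(self, task_idx, agv_idx):
        t = self.tasks.pop(task_idx)
        old_makespan = max(self.clocks)
        self.clocks[agv_idx] += t
        energy = t  # crude proxy: energy proportional to travel time
        delta_makespan = max(self.clocks) - old_makespan
        reward = -(self.w_energy * energy + self.w_time * delta_makespan)
        done = not self.tasks
        return self.state(), reward, done


def greedy_rollout(mdp):
    """Baseline policy: longest pending task goes to the least-loaded AGV.

    A learned MADDPG policy would replace this hand-written rule with
    per-agent actors trained against centralised critics.
    """
    total_reward = 0.0
    done = not mdp.tasks
    while not done:
        task = max(range(len(mdp.tasks)), key=lambda i: mdp.tasks[i])
        agv = min(range(len(mdp.clocks)), key=lambda i: mdp.clocks[i])
        _, r, done = mdp.step(task, agv)
        total_reward += r
    return total_reward, max(mdp.clocks)
```

Running `greedy_rollout(YardMDP([4, 3, 2, 1], num_agvs=2))` balances the four tasks across the two AGVs for a makespan of 5; a trained policy would instead be optimised end-to-end against the combined energy-and-makespan reward.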
| Original language | English |
|---|---|
| Pages (from-to) | 7722-7742 |
| Number of pages | 21 |
| Journal | International Journal of Production Research |
| Volume | 62 |
| Issue number | 21 |
| DOIs | |
| Publication status | Published - 2024 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 7 Affordable and Clean Energy
Keywords
- AGV real-time scheduling
- actor-critic networks
- container terminal yard
- deep reinforcement learning
- multi-agent systems
Fingerprint
Dive into the research topics of 'Real-time AGV scheduling optimisation method with deep reinforcement learning for energy-efficiency in the container terminal yard'. Together they form a unique fingerprint.