Dynamic flexible job-shop scheduling by multi-agent reinforcement learning with reward-shaping

Lixiang Zhang, Yan Yan, Chen Yang, Yaoguang Hu*

*Corresponding author for this work

Research output: Contribution to journal, Article, peer-reviewed

Abstract

Achieving mass personalization presents significant challenges in performance and adaptability when solving dynamic flexible job-shop scheduling problems (DFJSP). Previous studies have struggled to achieve high performance in variable contexts. To tackle this challenge, this paper introduces a novel scheduling strategy founded on heterogeneous multi-agent reinforcement learning. This strategy facilitates centralized optimization and decentralized decision-making through collaboration among job and machine agents while employing historical experiences to support data-driven learning. The DFJSP with transportation time is initially formulated as a heterogeneous multi-agent partially observable Markov decision process. This formulation outlines the interactions between decision-making agents and the environment, incorporating a reward-shaping mechanism that coordinates job and machine agents to minimize the weighted tardiness of dynamic jobs. Then, we develop a dueling double deep Q-network algorithm incorporating the reward-shaping mechanism to ascertain the optimal strategies for machine allocation and job sequencing in DFJSP. This approach addresses the sparse-reward issue and accelerates the learning process. Finally, the efficiency of the proposed method is verified and validated through numerical experiments, which demonstrate its superiority in reducing the weighted tardiness of dynamic jobs when compared to state-of-the-art baselines. The proposed method exhibits remarkable adaptability when encountering new scenarios, underscoring the benefits of adopting a heterogeneous multi-agent reinforcement learning-based scheduling approach in navigating dynamic and flexible challenges.
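To make the learning step described in the abstract concrete, the sketch below illustrates in PyTorch how a dueling double deep Q-network update with a potential-based reward-shaping term can be assembled. This is a minimal illustration under our own assumptions, not the authors' implementation: the network sizes, the `shaped_reward` helper, and the potential values `phi_s`/`phi_s_next` (imagined here as an estimate of remaining weighted tardiness) are hypothetical, and the paper's actual agent observations, action spaces, and shaping design are not reproduced.

```python
# Minimal sketch (not the paper's code): dueling double DQN update with a
# hypothetical potential-based reward-shaping term.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, obs):
        h = self.feature(obs)
        v, a = self.value(h), self.advantage(h)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

def shaped_reward(r, phi_s, phi_s_next, gamma=0.99):
    # Potential-based shaping: r' = r + gamma * phi(s') - phi(s).
    # phi is an assumed potential, e.g. negated estimated remaining weighted tardiness.
    return r + gamma * phi_s_next - phi_s

def double_dqn_loss(online, target, batch, gamma=0.99):
    obs, actions, rewards, next_obs, done = batch
    q = online(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Double DQN: the online net selects the next action, the target net evaluates it.
        next_actions = online(next_obs).argmax(dim=1, keepdim=True)
        next_q = target(next_obs).gather(1, next_actions).squeeze(1)
        td_target = rewards + gamma * (1.0 - done) * next_q
    return nn.functional.smooth_l1_loss(q, td_target)
```

In a multi-agent setting of the kind described, separate networks of this form could be trained for job-sequencing and machine-allocation agents, with `rewards` already passed through `shaped_reward` so that the shaping term densifies the otherwise sparse tardiness signal.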

Original language: English
Article number: 102872
Journal: Advanced Engineering Informatics
Volume: 62
DOI
Publication status: Published - Oct 2024
