TY - JOUR
T1 - Distributed Real-Time Scheduling in Cloud Manufacturing by Deep Reinforcement Learning
AU - Zhang, Lixiang
AU - Yang, Chen
AU - Yan, Yan
AU - Hu, Yaoguang
N1 - Publisher Copyright:
© 2005-2012 IEEE.
PY - 2022/12/1
Y1 - 2022/12/1
N2 - With the widespread adoption of automated guided vehicles, real-time production scheduling that accounts for logistics services in cloud manufacturing (CM) has become an urgent problem. This study therefore focuses on the distributed real-time scheduling (DRTS) of multiple services to respond to dynamic and customized orders. First, a DRTS framework with cloud-edge collaboration is proposed to improve performance and satisfy responsiveness requirements, with distributed actors deployed in the edge layer and one centralized learner in the cloud layer. The DRTS problem is modeled as a semi-Markov decision process in which the sequencing of processing services and the assignment of logistics services are considered simultaneously. A distributed dueling deep Q network (D3QN) with cloud-edge collaboration is then developed to optimize the weighted tardiness of jobs. Experimental results show that the proposed D3QN achieves lower weighted tardiness and shorter flow-time than other state-of-the-art algorithms, indicating that the proposed DRTS method has significant potential to provide efficient real-time decision-making in CM.
AB - With the widespread adoption of automated guided vehicles, real-time production scheduling that accounts for logistics services in cloud manufacturing (CM) has become an urgent problem. This study therefore focuses on the distributed real-time scheduling (DRTS) of multiple services to respond to dynamic and customized orders. First, a DRTS framework with cloud-edge collaboration is proposed to improve performance and satisfy responsiveness requirements, with distributed actors deployed in the edge layer and one centralized learner in the cloud layer. The DRTS problem is modeled as a semi-Markov decision process in which the sequencing of processing services and the assignment of logistics services are considered simultaneously. A distributed dueling deep Q network (D3QN) with cloud-edge collaboration is then developed to optimize the weighted tardiness of jobs. Experimental results show that the proposed D3QN achieves lower weighted tardiness and shorter flow-time than other state-of-the-art algorithms, indicating that the proposed DRTS method has significant potential to provide efficient real-time decision-making in CM.
KW - Cloud-edge collaboration
KW - cloud manufacturing
KW - deep reinforcement learning
KW - distributed
KW - real-time scheduling
UR - http://www.scopus.com/inward/record.url?scp=85139378998&partnerID=8YFLogxK
U2 - 10.1109/TII.2022.3178410
DO - 10.1109/TII.2022.3178410
M3 - Article
AN - SCOPUS:85139378998
SN - 1551-3203
VL - 18
SP - 8999
EP - 9007
JO - IEEE Transactions on Industrial Informatics
JF - IEEE Transactions on Industrial Informatics
IS - 12
ER -