Deep reinforce learning for joint optimization of condition-based maintenance and spare ordering

Shengang Hao, Jun Zheng, Jie Yang, Haipeng Sun, Quanxin Zhang, Li Zhang*, Nan Jiang, Yuanzhang Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

A condition-based maintenance (CBM) policy can avoid premature or late maintenance and reduce system failures and maintenance costs. Most existing CBM studies cannot overcome the curse of dimensionality in multi-component complex systems, and only a few consider maintenance-resource constraints when searching for the optimal maintenance policy, which makes them hard to apply in practice. This paper studies the joint optimization of the CBM policy and spare-component inventory for a multi-component system with large state and action spaces. We model the problem as a Markov Decision Process and propose an improved deep reinforcement learning algorithm based on a stochastic policy and the actor-critic framework. In this algorithm, factorization decomposes the system action into a linear combination of each component's actions. The experimental results show that the proposed algorithm achieves better time performance and lower system cost than the benchmark algorithms: its training time is only 28.5% and 9.12% of that of the PPO and DQN algorithms, and the corresponding system cost is reduced by 17.39% and 27.95%, respectively. Moreover, our algorithm scales well and is suitable for solving Markov decision problems with large state and action spaces.
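The action factorization described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the component count, degradation levels, action set, and all names below are hypothetical. The idea shown is that each component gets its own small policy head, the joint action is the tuple of per-component actions, and the joint log-probability is the sum of per-component log-probabilities, so the action space grows linearly rather than exponentially with the number of components.

```python
import numpy as np

rng = np.random.default_rng(0)

N_COMPONENTS = 3  # hypothetical multi-component system size
N_STATES = 5      # discretized degradation levels per component
N_ACTIONS = 3     # e.g. 0: do nothing, 1: imperfect repair, 2: replace

# One small policy head per component (factored actor):
# logits[c, s] gives the action scores for component c in degradation state s.
logits = rng.normal(size=(N_COMPONENTS, N_STATES, N_ACTIONS))

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = np.exp(x - x.max())
    return z / z.sum()

def factored_action(state):
    """Sample a joint maintenance action, one sub-action per component.

    The joint log-probability is the sum of the per-component log-probs,
    so the policy-gradient term decomposes linearly across components
    instead of requiring a head over the exponential joint action space.
    """
    actions, logp = [], 0.0
    for c, s in enumerate(state):
        probs = softmax(logits[c, s])
        a = int(rng.choice(N_ACTIONS, p=probs))
        actions.append(a)
        logp += np.log(probs[a])
    return actions, logp

state = [0, 2, 4]  # current degradation level of each component
joint_action, joint_logp = factored_action(state)
```

With this decomposition, adding a component adds one more policy head of size `N_STATES × N_ACTIONS` rather than multiplying the joint action space by `N_ACTIONS`, which is what makes the approach tractable in large-scale settings.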

Original language: English
Pages (from-to): 85-100
Number of pages: 16
Journal: Information Sciences
Volume: 634
DOI
Publication status: Published - Jul 2023
