Reinforcement Learning-Based Model Predictive Control for Discrete-Time Systems

Min Lin, Zhongqi Sun*, Yuanqing Xia, Jinhui Zhang

*Corresponding author of this work

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)

Abstract

This article proposes a novel reinforcement learning-based model predictive control (RLMPC) scheme for discrete-time systems. The scheme integrates model predictive control (MPC) and reinforcement learning (RL) through policy iteration (PI): MPC serves as the policy generator, and RL is employed to evaluate the generated policy. The resulting value function is then taken as the terminal cost of MPC, thereby improving the generated policy. The advantage of this approach is that it eliminates the offline design of the terminal cost, the auxiliary controller, and the terminal constraint required in traditional MPC. Moreover, since the terminal constraint is removed, the proposed RLMPC allows a more flexible choice of prediction horizon, which has great potential for reducing the computational burden. We provide a rigorous analysis of the convergence, feasibility, and stability properties of RLMPC. Simulation results show that RLMPC achieves nearly the same performance as traditional MPC in the control of linear systems and outperforms traditional MPC for nonlinear ones.
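The policy-iteration loop described in the abstract can be sketched for the linear-quadratic case, where both steps have closed forms: policy generation is a finite-horizon MPC solved by a backward Riccati sweep with the current terminal cost, and policy evaluation is a Lyapunov fixed point for the closed-loop policy. The system matrices, horizon, and weights below are hypothetical illustration values, not taken from the paper; this is a minimal sketch of the scheme, not the authors' implementation.

```python
import numpy as np

# Hypothetical stable discrete-time system x+ = A x + B u (illustration only)
A = np.array([[0.95, 0.10], [0.0, 0.90]])
B = np.array([[0.005], [0.10]])
Q = np.eye(2)          # state stage cost weight
R = np.array([[0.1]])  # input stage cost weight
N = 5                  # MPC prediction horizon (assumed value)

def mpc_policy(P_term):
    """Policy generation: N-step MPC with terminal cost x' P_term x.
    For linear-quadratic problems this reduces to a backward Riccati
    sweep; the gain from the final sweep is the first-step feedback
    u = K x, i.e. the policy the MPC actually applies."""
    P = P_term
    for _ in range(N):
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + K.T @ R @ K + (A + B @ K).T @ P @ (A + B @ K)
    return K

def evaluate_policy(K, iters=500):
    """Policy evaluation: value function of the closed loop u = K x,
    i.e. the fixed point of P = Q + K'RK + Acl' P Acl (Lyapunov
    recursion, converging since Acl is stable here)."""
    Acl = A + B @ K
    P = np.zeros_like(Q)
    for _ in range(iters):
        P = Q + K.T @ R @ K + Acl.T @ P @ Acl
    return P

# Policy iteration: the evaluated value function becomes the next
# terminal cost of the MPC, improving the generated policy.
P_term = np.zeros_like(Q)
for _ in range(20):
    K = mpc_policy(P_term)
    P_term = evaluate_policy(K)

# At convergence P_term satisfies the discrete algebraic Riccati
# equation, so the RLMPC policy matches the infinite-horizon optimum.
```

In this linear setting the loop converges to the Riccati solution, which illustrates the abstract's claim that the learned terminal cost replaces the offline-designed one; for nonlinear systems the paper uses function approximation instead of these closed forms.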

Original language: English
Pages (from-to): 3312-3324
Number of pages: 13
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 3
DOI
Publication status: Published - 1 Mar 2024
