Data-Driven MPC for Nonlinear Systems with Reinforcement Learning

Yiran Li, Qian Wang, Zhongqi Sun, Yuanqing Xia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

Inspired by Willems and co-authors' idea that persistently exciting system trajectories can be used to represent the input-output behavior of discrete-time linear time-invariant (DT LTI) systems, we extend this idea to nonlinear systems. In this paper, we propose a data-driven model predictive control (MPC) scheme with reinforcement learning (RL) for unknown nonlinear systems. The input-output data of the system are arranged into Hankel matrices that represent the system model implicitly, and prediction accuracy is improved by updating the data online. The other core idea of this scheme is to combine standard MPC with RL, approximating the terminal cost function by TD-learning to ensure closed-loop stability. Simulation experiments on a cart-damper-spring system demonstrate the feasibility of the proposed algorithm.
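The Hankel-matrix representation the abstract refers to can be sketched as follows. This is a minimal illustration of the general construction behind Willems et al.'s fundamental lemma, not the paper's specific algorithm; the function name, signal, and window depth below are assumptions chosen for the example.

```python
import numpy as np

def hankel_matrix(w, L):
    """Stack a trajectory w (T x m array) into a block-Hankel matrix of depth L.

    Each column is a length-L window of the trajectory. Per the fundamental
    lemma, if the input is persistently exciting of sufficiently high order,
    the columns of the input-output Hankel matrices span every length-L
    trajectory of a controllable LTI system, so the matrix acts as an
    implicit (model-free) system representation.
    """
    T, m = w.shape
    cols = T - L + 1
    H = np.zeros((L * m, cols))
    for i in range(cols):
        H[:, i] = w[i:i + L].reshape(-1)  # flatten the window into one column
    return H

# Hypothetical scalar input trajectory of length T = 10, window depth L = 3
u = np.arange(10.0).reshape(-1, 1)
H = hankel_matrix(u, 3)
# H has shape (3, 8); column j holds [u_j, u_{j+1}, u_{j+2}]
```

In a data-driven MPC scheme of this kind, predicted trajectories are constrained to lie in the column span of such Hankel matrices instead of being generated by an explicit model; updating the stored data online, as the abstract describes, refreshes these matrices with recent measurements.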

Original language: English
Title of host publication: Proceedings of the 41st Chinese Control Conference, CCC 2022
Editors: Zhijun Li, Jian Sun
Publisher: IEEE Computer Society
Pages: 2404-2409
Number of pages: 6
ISBN (Electronic): 9789887581536
DOIs
Publication status: Published - 2022
Event: 41st Chinese Control Conference, CCC 2022 - Hefei, China
Duration: 25 Jul 2022 - 27 Jul 2022

Publication series

Name: Chinese Control Conference, CCC
Volume: 2022-July
ISSN (Print): 1934-1768
ISSN (Electronic): 2161-2927

Conference

Conference: 41st Chinese Control Conference, CCC 2022
Country/Territory: China
City: Hefei
Period: 25/07/22 - 27/07/22

Keywords

  • Model predictive control (MPC)
  • data-driven method
  • nonlinear systems
  • reinforcement learning (RL)
