Model Predictive Control-Based Value Estimation for Efficient Reinforcement Learning

Qizhen Wu, Kexin Liu, Lei Chen*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Reinforcement learning (RL) suffers from limitations in practice, primarily because of the large number of interactions it requires with the environment. This poses a challenging problem: with only a few attempts, many learning methods cannot obtain even a locally optimal strategy. We therefore design an improved RL method based on model predictive control that models the environment through a data-driven approach. Using the learned environment model, the method performs multistep prediction to estimate the value function and optimize the policy. It achieves higher learning efficiency, faster convergence of the strategy toward the local optimum, and a smaller sample capacity required by the experience replay buffer. Experimental results, both on classic benchmarks and in a dynamic obstacle-avoidance scenario for an unmanned aerial vehicle, validate the proposed approach.
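As a rough illustration of the multistep prediction the abstract describes, the sketch below rolls a learned dynamics model forward over a short horizon, accumulates predicted rewards, and bootstraps with the value function at the horizon. Every name and hyperparameter in it (dynamics_model, reward_model, policy, value_fn, horizon, gamma) is an assumption chosen for illustration, not the authors' actual implementation.

```python
# Hedged sketch: MPC-style multistep value estimation with a learned model.
# The stand-in callables below are illustrative assumptions, not the paper's code.

def multistep_value_estimate(state, dynamics_model, reward_model,
                             policy, value_fn, horizon=5, gamma=0.99):
    """Value target: discounted model-predicted rewards over `horizon`
    steps plus a bootstrapped terminal value at the horizon."""
    total, discount = 0.0, 1.0
    s = state
    for _ in range(horizon):
        a = policy(s)                        # action from the current policy
        total += discount * reward_model(s, a)
        s = dynamics_model(s, a)             # learned one-step prediction
        discount *= gamma
    return total + discount * value_fn(s)   # bootstrap at the horizon

# Toy usage with stand-in models on a scalar state:
if __name__ == "__main__":
    target = multistep_value_estimate(
        state=0.0,
        dynamics_model=lambda s, a: s + a,   # stand-in learned dynamics
        reward_model=lambda s, a: -abs(s),   # stand-in learned reward
        policy=lambda s: -0.1 * s,           # stand-in policy
        value_fn=lambda s: 0.0,              # stand-in terminal value
    )
    print(target)
```

The design choice this sketch reflects is that longer model rollouts reduce reliance on the bootstrapped value estimate at the cost of compounding model error, which is why the horizon is kept short.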

Original language: English
Pages (from-to): 63-72
Number of pages: 10
Journal: IEEE Intelligent Systems
Volume: 39
Issue number: 3
DOIs
Publication status: Published - 1 May 2024
