Approximate Optimal Stabilization Control of Servo Mechanisms based on Reinforcement Learning Scheme

Yongfeng Lv, Xuemei Ren*, Shuangyi Hu, Hao Xu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

17 Citations (Scopus)

Abstract

A reinforcement learning (RL) based adaptive dynamic programming (ADP) scheme is developed to learn the approximate optimal stabilization input of servo mechanisms, where the unknown system dynamics are approximated with a three-layer neural network (NN) identifier. First, the servo mechanism model is constructed, and a three-layer NN identifier is used to approximate the unknown servo system; the NN weights of both the hidden layer and the output layer are synchronously tuned with an adaptive gradient law. An RL-based three-layer critic NN is then used to learn the optimal cost function: the NN weights of the first layer are set as constants, while the NN weights of the second layer are updated by minimizing the squared Hamilton-Jacobi-Bellman (HJB) error. The optimal stabilization input of the servo mechanism is obtained from the three-layer NN identifier and the RL-based critic NN, and it drives the motor speed from its initial value to the given value. Moreover, convergence of the identifier and the RL-based critic NN is proved, and stability of the cost function under the proposed optimal input is analyzed. Finally, a servo mechanism model and a complex system are provided to verify the correctness of the proposed methods.
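The abstract's core idea, tuning critic NN weights by minimizing the squared HJB error, can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a scalar linear system dx/dt = a·x + b·u with quadratic cost, a one-term critic V(x) ≈ w·x², and a standard normalized-gradient update on the HJB residual; all numerical values are illustrative.

```python
import numpy as np

# Illustrative assumptions (not from the paper):
a, b = -1.0, 1.0          # dynamics: dx/dt = a*x + b*u
q, r = 1.0, 1.0           # running cost: q*x^2 + r*u^2
w = 0.0                   # critic weight, V(x) ~ w * x^2
alpha = 1.0               # learning rate

xs = np.linspace(0.5, 2.0, 50)        # state samples for excitation
for it in range(20000):
    x = xs[it % len(xs)]
    u = -(b * w / r) * x              # greedy policy from current critic
    sigma = 2.0 * x * (a * x + b * u) # regressor: dV/dw's gradient along the dynamics
    delta = w * sigma + q * x**2 + r * u**2          # HJB residual
    w -= alpha * sigma * delta / (1.0 + sigma**2)**2  # normalized gradient step

# For this scalar LQR case the HJB has the closed-form solution
# w* = r*(a + sqrt(a^2 + b^2*q/r)) / b^2 = sqrt(2) - 1
print(w)
```

In this toy case the learned weight can be checked against the closed-form Riccati solution; in the paper, the same squared-HJB-error minimization is applied with a three-layer critic NN and an NN identifier standing in for the unknown dynamics.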

Original language: English
Pages (from-to): 2655-2665
Number of pages: 11
Journal: International Journal of Control, Automation and Systems
Volume: 17
Issue number: 10
DOIs
Publication status: Published - 1 Oct 2019

Keywords

  • Adaptive dynamic programming
  • Neural networks
  • Optimal control
  • Reinforcement learning
  • Servomechanisms

