Event-triggered reinforcement learning Q-function control based on spectral normalized neural networks

Research output: Contribution to journal › Article › peer-review

Abstract

In this article, we propose an event-triggered reinforcement learning (RL) Q-function control scheme based on a spectral normalised neural network (NN) identifier. A low-computational-cost spectral normalised NN with an improved linear activation function is applied to identify the unknown system, which greatly improves the identifier's generalisation ability and decreases its sensitivity to the initial state. Then, an event-triggered mechanism is designed to reduce the number of controller triggerings, and a Q-function is constructed from the Hamilton-Jacobi-Bellman (HJB) equation and the value function to relax the persistence of excitation (PE) condition. The Q-function is approximated by a critic NN so that the optimal event-triggered control can be obtained. Moreover, stability under the event-triggered Q-function control is analysed. Finally, simulation and comparison results demonstrate the effectiveness of the proposed method.
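The spectral normalisation mentioned in the abstract constrains a weight matrix's largest singular value, which bounds the layer's Lipschitz constant and tends to improve generalisation. The sketch below illustrates only the generic technique via power iteration; the paper's specific identifier architecture and improved linear activation function are not reproduced, and all names here are illustrative.

```python
import numpy as np

def spectral_normalise(W, n_iter=50):
    """Estimate the largest singular value of W by power iteration and
    divide W by it, so the normalised matrix has spectral norm ~1.
    Generic spectral-normalisation sketch, not the paper's identifier."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # converged estimate of the top singular value
    return W / sigma

# Example: normalise a random 4x3 weight matrix
W = np.random.default_rng(1).standard_normal((4, 3))
W_sn = spectral_normalise(W)
print(np.linalg.norm(W_sn, 2))  # spectral norm is now approximately 1
```

In a learned identifier this normalisation would typically be applied to each layer's weights after every training update, keeping the network's sensitivity to inputs (and hence to the initial state) under control.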

Original language: English
Journal: International Journal of Systems Science
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • event-triggered control
  • optimal control
  • reinforcement learning
  • spectral normalised neural network

