TY - JOUR
T1 - Online Reinforcement Learning Control by Direct Heuristic Dynamic Programming
T2 - From Time-Driven to Event-Driven
AU - Zhao, Qingtao
AU - Si, Jennie
AU - Sun, Jian
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2022/8/1
Y1 - 2022/8/1
N2 - In this work, time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives. Among existing approximate dynamic programming (ADP) and reinforcement learning (RL) algorithms, direct heuristic dynamic programming (dHDP) has been shown to be an effective tool, as demonstrated in solving several complex learning control problems. It continuously updates the control policy and the critic as the system states continuously evolve. It is therefore desirable to prevent the time-driven dHDP from updating in response to insignificant system events such as noise. Toward this goal, we propose a new event-driven dHDP. By constructing a Lyapunov function candidate, we prove the uniform ultimate boundedness (UUB) of the system states and the weights in the critic and the control policy networks. Consequently, we show that the approximate control and cost-to-go function approach Bellman optimality within a finite bound. We also illustrate how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
AB - In this work, time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives. Among existing approximate dynamic programming (ADP) and reinforcement learning (RL) algorithms, direct heuristic dynamic programming (dHDP) has been shown to be an effective tool, as demonstrated in solving several complex learning control problems. It continuously updates the control policy and the critic as the system states continuously evolve. It is therefore desirable to prevent the time-driven dHDP from updating in response to insignificant system events such as noise. Toward this goal, we propose a new event-driven dHDP. By constructing a Lyapunov function candidate, we prove the uniform ultimate boundedness (UUB) of the system states and the weights in the critic and the control policy networks. Consequently, we show that the approximate control and cost-to-go function approach Bellman optimality within a finite bound. We also illustrate how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
KW - Direct heuristic dynamic programming (dHDP)
KW - event-driven/time-driven dHDP
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85100703711&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2021.3053037
DO - 10.1109/TNNLS.2021.3053037
M3 - Article
C2 - 33534714
AN - SCOPUS:85100703711
SN - 2162-237X
VL - 33
SP - 4139
EP - 4144
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 8
ER -