TY - JOUR
T1 - Minimax Q-learning design for H∞ control of linear discrete-time systems
AU - Li, Xinxing
AU - Xi, Lele
AU - Zha, Wenzhong
AU - Peng, Zhihong
N1 - Publisher Copyright:
© 2022, Zhejiang University Press.
PY - 2022/3
Y1 - 2022/3
N2 - The H∞ control method is an effective approach for attenuating the effect of disturbances on practical systems, but it is difficult to obtain the H∞ controller due to the nonlinear Hamilton-Jacobi-Isaacs equation, even for linear systems. This study deals with the design of an H∞ controller for linear discrete-time systems. To solve the related game algebraic Riccati equation (GARE), a novel model-free minimax Q-learning method is developed, on the basis of an offline policy iteration algorithm, which is shown to be Newton's method for solving the GARE. The proposed minimax Q-learning method, which employs off-policy reinforcement learning, learns the optimal control policies for the controller and the disturbance online, using only the state samples generated by the implemented behavior policies. Different from existing Q-learning methods, a novel gradient-based policy improvement scheme is proposed. We prove that the minimax Q-learning method converges to the saddle solution under initially admissible control policies and an appropriate positive learning rate, provided that certain persistence of excitation (PE) conditions are satisfied. In addition, the PE conditions can be easily met by choosing appropriate behavior policies containing certain excitation noises, without causing any excitation noise bias. In the simulation study, we apply the proposed minimax Q-learning method to design an H∞ load-frequency controller for an electrical power system generator that suffers from load disturbance, and the simulation results indicate that the obtained H∞ load-frequency controller has good disturbance rejection performance.
KW - Adaptive dynamic programming
KW - H∞ control
KW - Minimax Q-learning
KW - Policy iteration
KW - Reinforcement learning
KW - TP13
KW - Zero-sum dynamic game
UR - http://www.scopus.com/inward/record.url?scp=85124331782&partnerID=8YFLogxK
U2 - 10.1631/FITEE.2000446
DO - 10.1631/FITEE.2000446
M3 - Article
AN - SCOPUS:85124331782
SN - 2095-9184
VL - 23
SP - 438
EP - 451
JO - Frontiers of Information Technology and Electronic Engineering
JF - Frontiers of Information Technology and Electronic Engineering
IS - 3
ER -