Reinforcement Learning for Quantization of Boundary Control Inputs: A Comparison of PPO-based Strategies

Yibo Wang*, Wen Kang

*Corresponding author of this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

Abstract

This paper investigates the boundary stabilization problem for the Korteweg-de Vries (KdV) system with quantized control inputs via the deep reinforcement learning (DRL) approach. To examine the impact of different placements of the quantizer on stabilization performance, we discuss two scenarios: the quantizer placed in the environment and in the agent. In the case of 'introducing the quantizer into the agent', we further explore two variations: optimizing the parameters of the discretized continuous distribution and directly optimizing the parameters of the discrete distribution. Finally, simulation results demonstrate that the proposed proximal policy optimization (PPO)-based strategies can train DRL controllers that effectively stabilize the target system, with the approach directly learning the parameters of the discrete distribution achieving the highest stabilization efficiency among the quantization-based scenarios.
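The two quantization placements compared in the abstract can be illustrated with a minimal sketch (not the authors' implementation; all names and the uniform quantization grid are illustrative assumptions): either a continuous policy's action is passed through a quantizer before being applied as the boundary control input, or the policy directly parameterizes a categorical distribution over the quantization levels.

```python
import numpy as np

# Assumed quantization grid for the boundary control input u(t):
# 2M+1 uniform levels on [-u_max, u_max] (values are illustrative).
u_max, M = 2.0, 4
levels = np.linspace(-u_max, u_max, 2 * M + 1)

def quantize(u):
    """Map a continuous action to the nearest quantization level
    (the 'quantizer in the environment' placement)."""
    return levels[np.argmin(np.abs(levels - u))]

rng = np.random.default_rng(0)

# Scenario A: continuous Gaussian policy, output quantized afterwards.
mu, sigma = 0.3, 0.5              # stand-ins for a policy network's outputs
u_cont = rng.normal(mu, sigma)
u_applied = quantize(u_cont)

# Scenario B: directly learn a discrete (categorical) distribution
# over the quantization levels and sample an index.
logits = np.zeros(len(levels))    # stand-in for a discrete policy head
probs = np.exp(logits) / np.exp(logits).sum()
idx = rng.choice(len(levels), p=probs)
u_direct = levels[idx]

# Either way, the control actually applied lies on the quantization grid.
assert u_applied in levels and u_direct in levels
```

In PPO terms, Scenario B makes the log-probability of the applied (quantized) action exactly the log-probability the policy optimizes, which is one plausible reading of why directly learning the discrete distribution performs best among the quantization-based scenarios.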

Original language: English
Title of host publication: Proceedings of the 43rd Chinese Control Conference, CCC 2024
Editors: Jing Na, Jian Sun
Publisher: IEEE Computer Society
Pages: 1093-1098
Number of pages: 6
ISBN (electronic): 9789887581581
DOI
Publication status: Published - 2024
Event: 43rd Chinese Control Conference, CCC 2024 - Kunming, China
Duration: 28 Jul 2024 - 31 Jul 2024

Publication series

Name: Chinese Control Conference, CCC
ISSN (print): 1934-1768
ISSN (electronic): 2161-2927

Conference

Conference: 43rd Chinese Control Conference, CCC 2024
Country/Territory: China
City: Kunming
Period: 28/07/24 - 31/07/24
