Off-Policy Learning-Based Following Control of Cooperative Autonomous Vehicles Under Distributed Attacks

Yong Xu, Zheng Guang Wu*, Ya Jun Pan

*Corresponding author for this work

Research output: Contribution to journal › Article › Peer-review

10 Citations (Scopus)

Abstract

This paper investigates the resilient distributed secure output path following control problem of heterogeneous autonomous ground vehicles (AGVs) subject to cyber attacks, based on a reinforcement learning algorithm. Most existing results assume the same attack model for all communication channels; in contrast, this paper considers attacks on multiple channels launched by different attackers. First, a predictor-acknowledgement clock algorithm is proposed for each vehicle to determine whether the communication channel between neighboring vehicles is under attack, by receiving or transmitting an acknowledgement. Then, a resilient distributed predictor is proposed to predict the pinning vehicle's state for each vehicle. In addition, a resilient local control protocol, consisting of the feedforward state provided by the predictor and the local feedback state of each vehicle, is developed for the output path following problem, which is further converted to an optimal control problem by designing a discounted performance function. Discounted algebraic Riccati equations (AREs) are derived to address the optimal control problem. An off-policy reinforcement learning (RL) algorithm is put forward to learn the solution of the discounted AREs online without any prior knowledge of the vehicles' dynamics. It is shown that the RL-based output path following control of AGVs subject to cyber attacks can be achieved in an optimal manner. Finally, a numerical example is provided to verify the effectiveness of the theoretical analysis.
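The abstract's route from a discounted performance function to a discounted ARE can be illustrated with a standard identity: minimizing an exponentially discounted quadratic cost for a linear system is equivalent to solving an ordinary continuous-time ARE with the state matrix shifted by half the discount factor. The sketch below is not the paper's algorithm (which learns the solution off-policy without model knowledge); it is a model-based baseline on a hypothetical double-integrator vehicle model, with the discount factor `gamma` chosen arbitrarily for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical single-vehicle model (double integrator), not from the paper.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # input weighting
gamma = 0.1              # discount rate of the performance function

# Discounting the cost by e^{-gamma t} is equivalent to replacing A with
# A - (gamma/2) I, turning the discounted ARE into a standard one.
A_shift = A - (gamma / 2.0) * np.eye(2)
P = solve_continuous_are(A_shift, B, Q, R)

# Optimal feedback gain K = R^{-1} B^T P and the ARE residual (should be ~0).
K = np.linalg.solve(R, B.T @ P)
residual = A_shift.T @ P + P @ A_shift - P @ B @ K + Q
```

In the paper's setting the same fixed-point is instead approximated online from input-state data, which is what removes the need for prior knowledge of the vehicle dynamics.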

Original language: English
Pages (from-to): 5120-5130
Number of pages: 11
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 24
Issue number: 5
DOI
Publication status: Published - 1 May 2023

