Learning-Based Optimal Adaptive Resilient Control for Safe Autonomous Driving Under Cyberattacks

  • Guoqiang Li*
  • Zhenyang Li
  • Yu Lu
  • Zhenpo Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Malicious cyberattacks on connected automated vehicles pose a major threat to the safety and security of autonomous driving. Unlike traditional approaches, which generally require perfect knowledge of system models and state measurements, this article proposes a novel learning-based adaptive resilient control framework to defend against various false data injection attacks on the steering system and improve driving safety for automated vehicles. First, a robust nonlinear state estimation method is developed to provide accurate observation of unmeasured state variables for feedback control with limited onboard sensors. Then, an active model learning approach is proposed to capture vehicle driving behavior under different attacks and refine the dynamic model for state prediction over a receding horizon. Finally, a data-driven attack-resilient method is designed to optimize vehicle motion for autonomous driving, and the derived control policy adapts to different scenarios to ensure safety. A MATLAB/Simulink and CarSim co-simulation platform is used to evaluate the effectiveness and robustness of the proposed method in state estimation, model learning, and accurate tracking control under various attack conditions.
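The abstract's pipeline (estimate the state, learn the attack's effect on the actuation channel, then re-optimize the control over a receding horizon) can be illustrated with a deliberately simplified sketch. This is not the paper's algorithm: the scalar lateral-error dynamics, the constant-bias attack, the residual-based bias estimator, and the grid-search horizon optimization below are all stand-in assumptions chosen only to show how the three stages fit together.

```python
import numpy as np

# Hypothetical false data injection: a constant bias on the steering
# command after time step 20 (stand-in for the paper's FDI attacks).
def fdi_attack(u, t):
    return u + (0.3 if t >= 20 else 0.0)

# Toy linear lateral-error dynamics, x = [lateral error, error rate].
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([0.0, 0.5])

def plant_step(x, u):
    return A @ x + B * u

def receding_horizon_control(x_hat, bias_hat, horizon=5):
    # Pick the constant steering input that minimizes predicted tracking
    # cost, with the learned attack bias folded into the prediction model
    # (a crude stand-in for the model-learning stage).
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(-1.0, 1.0, 201):
        x, cost = x_hat.copy(), 0.0
        for _ in range(horizon):
            x = plant_step(x, u + bias_hat)
            cost += x[0] ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: state assumed perfectly observed here; the attack bias is
# learned online from one-step prediction residuals.
x = np.array([1.0, 0.0])
bias_hat = 0.0
for t in range(60):
    u = receding_horizon_control(x, bias_hat)
    x_next = plant_step(x, fdi_attack(u, t))      # attack corrupts actuation
    pred = plant_step(x, u + bias_hat)            # model-based prediction
    residual = (x_next - pred)[1] / B[1]          # actuation discrepancy via B
    bias_hat += 0.2 * residual                    # simple adaptive update
    x = x_next
```

Because the dynamics are linear, the residual equals exactly the remaining bias-estimate error, so `bias_hat` converges geometrically to the injected 0.3 and the controller regains near-nominal tracking after the attack onset.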

Original language: English
Pages (from-to): 20858-20869
Number of pages: 12
Journal: IEEE Internet of Things Journal
Volume: 12
Issue number: 12
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Adaptive resilient control
  • false data injection (FDI) attacks
  • model learning
  • state estimation
