Model-Free Control Framework for Stability and Path-tracking of Autonomous Independent-Drive Vehicles

Yong Wang, Jianming Tang, Qin Li*, Yanan Zhao, Chen Sun, Hongwen He

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents a model-free integrated control framework that uses deep reinforcement learning (DRL) to improve the stability and safety of four-wheel independently driven autonomous electric vehicles. The proposed framework achieves precise path tracking and yaw motion control without relying on an accurate tire model. We introduce a novel hybrid DRL control strategy that effectively combines the Stanley controller with a DRL agent. This strategy enables trial-and-error learning through interaction with the vehicle environment, without requiring future state predictions or detailed mathematical models, ensuring adaptability, model independence, and strong real-time performance. Simulation results show that the strategy significantly improves lateral stability and tracking accuracy across various road conditions and speeds. Compared to model predictive control, the model-free method delivers better control performance and real-time responsiveness. Real-vehicle testing further validates the practical effectiveness of the proposed control strategy.
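The abstract names the Stanley controller as the geometric path-tracking component of the hybrid strategy. As a point of reference only, the classic Stanley steering law combines the heading error with an arctangent correction on the front-axle cross-track error; a minimal sketch follows (the gain `k`, softening constant, and steering limit are illustrative assumptions, not values from the paper):

```python
import math

def stanley_steering(heading_error, cross_track_error, speed,
                     k=0.5, softening=1.0, max_steer=math.radians(30)):
    """Classic Stanley steering law (illustrative parameter values).

    heading_error:     yaw error between vehicle heading and path tangent [rad]
    cross_track_error: signed lateral offset of the front axle from the path [m]
    speed:             longitudinal speed [m/s]; `softening` keeps the
                       correction term finite at standstill
    """
    # Steering command = heading correction + cross-track correction.
    delta = heading_error + math.atan2(k * cross_track_error, softening + speed)
    # Saturate to the physical steering limit.
    return max(-max_steer, min(max_steer, delta))
```

In a hybrid scheme of the kind the abstract describes, the DRL agent's output would typically be combined with such a baseline command (e.g. as an additive correction), though the exact composition used in the paper is not stated here.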

Original language: English
Journal: IEEE Transactions on Transportation Electrification
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • deep reinforcement learning
  • direct yaw moment control
  • four-wheel independently-driven vehicle
  • model-free control
  • trajectory tracking control

