TY - JOUR
T1 - Model-Free Control Framework for Stability and Path-Tracking of Autonomous Independent-Drive Vehicles
AU - Wang, Yong
AU - Tang, Jianming
AU - Li, Qin
AU - Zhao, Yanan
AU - Sun, Chen
AU - He, Hongwen
N1 - Publisher Copyright:
© 2015 IEEE.
PY - 2025
Y1 - 2025
N2 - This paper presents a model-free integrated control framework that uses deep reinforcement learning (DRL) to improve the stability and safety of four-wheel independently driven autonomous electric vehicles. The proposed framework achieves precise path tracking and yaw motion control without relying on an accurate tire model. We introduce a novel hybrid DRL control strategy that combines the Stanley controller with a DRL agent. This strategy enables trial-and-error learning through interaction with the vehicle environment, without requiring future state predictions or detailed mathematical models, ensuring adaptability, model independence, and superior real-time performance. Simulation results show that the strategy significantly improves lateral stability and tracking accuracy across various road conditions and speeds. Compared with model predictive control, the model-free method delivers better control performance and real-time responsiveness. Real-vehicle testing further validates the practical effectiveness of the proposed control strategy.
AB - This paper presents a model-free integrated control framework that uses deep reinforcement learning (DRL) to improve the stability and safety of four-wheel independently driven autonomous electric vehicles. The proposed framework achieves precise path tracking and yaw motion control without relying on an accurate tire model. We introduce a novel hybrid DRL control strategy that combines the Stanley controller with a DRL agent. This strategy enables trial-and-error learning through interaction with the vehicle environment, without requiring future state predictions or detailed mathematical models, ensuring adaptability, model independence, and superior real-time performance. Simulation results show that the strategy significantly improves lateral stability and tracking accuracy across various road conditions and speeds. Compared with model predictive control, the model-free method delivers better control performance and real-time responsiveness. Real-vehicle testing further validates the practical effectiveness of the proposed control strategy.
KW - deep reinforcement learning
KW - direct yaw moment control
KW - four-wheel independently-driven vehicle
KW - model-free control
KW - trajectory tracking control
UR - http://www.scopus.com/inward/record.url?scp=105004889398&partnerID=8YFLogxK
U2 - 10.1109/TTE.2025.3563395
DO - 10.1109/TTE.2025.3563395
M3 - Article
AN - SCOPUS:105004889398
SN - 2332-7782
JO - IEEE Transactions on Transportation Electrification
JF - IEEE Transactions on Transportation Electrification
ER -