Enhancing Safety in Autonomous Racing With Constrained Reinforcement Learning

Kai Yu, Mengyin Fu, Ting Zhang, Yi Yang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous racing serves as a critical benchmark for advancing autonomous vehicle technologies in challenging scenarios. Most traditional methods rely on complex vehicle dynamics models, posing significant challenges in modeling and computation. While reinforcement learning (RL) offers a promising alternative, ensuring safety remains a major challenge for real-world deployment. In this letter, we introduce safe RL into autonomous racing to reduce collisions. By formulating the problem as a Constrained Markov Decision Process (CMDP), agents are trained using two constrained RL algorithms. To further enhance safety, we propose a shielding framework based on vehicle rollover dynamics to limit the speed command. Experimental results in the F1TENTH simulator demonstrate the effectiveness of our method in improving safety while achieving competitive racing performance. We deploy different agents on a real 1/10-scale racecar without fine-tuning. With the maximum speed set to 4 m/s, our method successfully completes the track without colliding with the boundaries.
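The abstract mentions a shielding framework that limits the speed command based on vehicle rollover dynamics. As a minimal sketch of how such a shield might work, the snippet below clips the policy's commanded speed using a quasi-static rollover threshold; the specific model (static load-transfer limit), the parameter values (track width, CG height for a 1/10-scale car), and the function names are illustrative assumptions, not the letter's actual implementation.

```python
import math

def rollover_speed_limit(curve_radius_m, track_width_m, cg_height_m, g=9.81):
    """Quasi-static rollover threshold (assumed model, not from the letter).

    The inner wheels unload when lateral acceleration reaches
    a_y = g * (track_width / 2) / cg_height; the corresponding speed
    on a curve of radius R is v = sqrt(a_y * R).
    """
    a_y_max = g * (track_width_m / 2.0) / cg_height_m
    return math.sqrt(a_y_max * curve_radius_m)

def shield_speed_command(v_cmd, curve_radius_m,
                         track_width_m=0.28, cg_height_m=0.08, margin=0.9):
    """Clip the RL policy's speed command to a margin below the rollover limit.

    track_width_m and cg_height_m are hypothetical 1/10-scale values.
    """
    v_max = margin * rollover_speed_limit(curve_radius_m,
                                          track_width_m, cg_height_m)
    return min(v_cmd, v_max)
```

A shield of this form leaves the learned policy untouched on straights (the command passes through unchanged) and only intervenes in tight curves, which is why it can be layered on top of a constrained-RL agent without retraining.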

Original language: English
Pages (from-to): 6448-6455
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 10
Issue number: 6
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • Autonomous racing
  • constrained reinforcement learning
  • safety
  • shielding

Cite this