Reinforcement Learning-Based Attitude Stabilization Control for Robot Astronauts

  • Liping Fang*
  • Liang Tang*
  • Jun Zhang*
  • Quan Hu

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Robotic astronauts could play a crucial role in long-duration duties and on-orbit experiments aboard space stations in future missions. Achieving attitude stabilization is critical for performing precision tasks. However, for robotic astronauts with high degrees of freedom, intricate motion coupling, and rich environmental interactions, attitude control remains a significant challenge. A reinforcement learning-based framework is proposed to overcome this limitation, integrating curriculum learning with an Asymmetric Actor-Critic architecture and Proximal Policy Optimization (PPO). The approach is trained and validated within NVIDIA Isaac Gym, a high-performance GPU-accelerated physics simulation platform. The results demonstrate that the proposed policy enables rapid convergence of the robot's linear velocity, angular velocity, and attitude deviation, ensuring stable performance. Additionally, it shows strong generalization and robustness across varying initial conditions and curriculum levels. In conclusion, this strategy successfully achieves attitude stabilization control for robotic astronauts in space station environments, providing technical support for future on-orbit missions.
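The asymmetric actor-critic idea mentioned in the abstract can be illustrated with a minimal sketch: the actor conditions only on onboard-sensor observations, while the critic receives privileged full-state information that is available in simulation but not on the real robot. All dimensions, the network form, and the PPO clip coefficient below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed sizes: the actor sees a 12-D sensor observation, the critic a
# 24-D privileged simulator state, and the robot has 6 actuated joints.
OBS_DIM, STATE_DIM, ACT_DIM = 12, 24, 6
rng = np.random.default_rng(0)

W_actor = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))   # policy weights
W_critic = rng.normal(scale=0.1, size=(1, STATE_DIM))      # value weights

def actor(obs):
    """Mean action computed from the partial (onboard) observation only."""
    return np.tanh(W_actor @ obs)

def critic(full_state):
    """Value estimate from the privileged full state (simulation only)."""
    return float(W_critic @ full_state)

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# One illustrative forward pass through both halves of the architecture.
obs = rng.normal(size=OBS_DIM)       # actor input: sensor data only
state = rng.normal(size=STATE_DIM)   # critic input: full simulator state
action = actor(obs)
value = critic(state)
```

Because the critic is discarded at deployment, its privileged input never has to exist on the real robot; only the sensor-driven actor is exported.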

Original language: English
Pages (from-to): 2284-2289
Number of pages: 6
Journal: IFAC-PapersOnLine
Volume: 59
Issue number: 20
DOIs
Publication status: Published - 1 Aug 2025
Event: 23rd IFAC Symposium on Automatic Control in Aerospace, ACA 2025 - Harbin, China
Duration: 2 Aug 2025 - 6 Aug 2025

Keywords

  • Attitude stabilization control
  • Curriculum learning
  • Reinforcement learning
  • Robot astronauts
