Abstract
Robotic astronauts could play a crucial role in long-duration operations and on-orbit experiments aboard space stations in future missions. Achieving attitude stabilization is critical for performing precision tasks. However, for robotic astronauts with high degrees of freedom, intricate motion coupling, and frequent environmental interactions, attitude control remains a significant challenge. A reinforcement learning-based framework is proposed to overcome this limitation, integrating curriculum learning with an asymmetric actor-critic architecture and Proximal Policy Optimization (PPO). The approach is trained and validated in NVIDIA Isaac Gym, a high-performance GPU-accelerated physics simulation platform. The results demonstrate that the proposed policy enables rapid convergence of the robot's linear velocity, angular velocity, and attitude deviation, ensuring stable performance. It also generalizes robustly across varying initial conditions and curriculum levels. In conclusion, this strategy achieves attitude stabilization control for robotic astronauts in space station environments, providing technical support for future on-orbit missions.
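The abstract's core training loop combines PPO's clipped surrogate objective with an asymmetric actor-critic split, where the actor sees only onboard observations while the critic is trained on privileged simulator state. The paper's exact network shapes and observation contents are not given here, so the sketch below is a minimal, hedged illustration of those two ideas only; the array dimensions and variable names (`actor_obs`, `privileged_state`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Standard PPO clipped surrogate objective (negated for minimization)."""
    ratio = np.exp(logp_new - logp_old)          # importance-sampling ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (min) bound keeps policy updates conservative.
    return -np.mean(np.minimum(unclipped, clipped))

# Asymmetric actor-critic: the two networks receive different inputs.
# Shapes below are placeholders for illustration.
rng = np.random.default_rng(0)
actor_obs = rng.standard_normal((8, 12))         # actor: onboard sensing only
privileged_state = rng.standard_normal((8, 24))  # critic: full simulator state

# If the old and new policies agree, the ratio is 1 and the loss
# reduces to the negative mean advantage.
adv = np.array([1.0, 2.0, 3.0, 4.0])
loss = ppo_clip_loss(np.zeros(4), np.zeros(4), adv)  # → -2.5
```

In an asymmetric setup, the privileged critic yields lower-variance value estimates during simulation training, while the deployed actor never depends on state unavailable to the real robot.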
| Original language | English |
|---|---|
| Pages (from-to) | 2284-2289 |
| Number of pages | 6 |
| Journal | IFAC-PapersOnLine |
| Volume | 59 |
| Issue number | 20 |
| DOIs | |
| Publication status | Published - 1 Aug 2025 |
| Event | 23rd IFAC Symposium on Automatic Control in Aerospace, ACA 2025 - Harbin, China. Duration: 2 Aug 2025 → 6 Aug 2025 |
Keywords
- Attitude stabilization control
- Curriculum learning
- Reinforcement learning
- Robot astronauts