Optimizing Constrained Guidance Policy with Minimum Overload Regularization

Weilin Luo, Lei Chen, Kexin Liu, Haibo Gu, Jinhu Lu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Using a reinforcement learning (RL) algorithm to optimize the guidance law can address non-idealities in complex environments. However, the optimization is difficult due to the huge state-action space, unstable training, and high demands on expertise. In this paper, the constrained guidance policy of a neural guidance system is optimized using an improved RL algorithm, motivated by the idea of traditional model-based guidance methods. A novel optimization objective with minimum overload regularization is developed to directly restrain the guidance policy from generating redundant missile maneuvers. Moreover, a bi-level curriculum learning scheme is designed to facilitate the policy optimization. Experimental results show that the proposed minimum overload regularization can significantly reduce the vertical overloads of the missile, and the bi-level curriculum learning can further accelerate the optimization of the guidance policy.
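The abstract does not give the exact form of the regularized objective. As an illustration only, the following PyTorch sketch shows one plausible way a minimum overload penalty could be attached to a policy-gradient loss; the function name, the squared-overload penalty, and the coefficient `overload_coef` are assumptions, not the paper's formulation.

```python
import torch

# A minimal sketch (not the authors' exact formulation) of adding a minimum
# overload regularization term to an actor-critic policy loss.
def policy_loss_with_overload_reg(log_probs, advantages, overload_cmds,
                                  overload_coef=0.01):
    """Policy-gradient loss plus a penalty on commanded missile overload.

    log_probs:     log pi(a_t | s_t) for the sampled actions, shape (T,)
    advantages:    advantage estimates A_t, shape (T,)
    overload_cmds: commanded vertical overloads, shape (T,)
    """
    # Standard policy-gradient surrogate: maximize expected advantage.
    pg_loss = -(log_probs * advantages).mean()

    # Minimum overload regularization (assumed quadratic form): discourage
    # redundant maneuvers by penalizing the commanded overload magnitude.
    overload_penalty = overload_cmds.pow(2).mean()

    return pg_loss + overload_coef * overload_penalty
```

The key design idea conveyed by the abstract is that the penalty acts on the policy's output directly, rather than shaping the reward, so the guidance policy is discouraged from commanding maneuvers it does not need.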

Original language: English
Pages (from-to): 2994-3005
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems I: Regular Papers
Volume: 69
Issue number: 7
DOIs
Publication status: Published - 1 Jul 2022

Keywords

  • Missile guidance
  • curriculum learning
  • minimum overload regularization
  • reinforcement learning
