Autonomous air combat decision-making of UAV based on parallel self-play reinforcement learning

Bo Li, Jingyi Huang, Shuangxia Bai, Zhigang Gan, Shiyang Liang, Neretin Evgeny, Shouwen Yao*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Citations (Scopus)

Abstract

To address the problem of manoeuvring decision-making in UAV air combat, this study establishes a one-to-one air combat model, defines missile attack areas, and uses the stochastic-policy Soft Actor-Critic (SAC) deep reinforcement learning algorithm to construct a decision model that realises the manoeuvring process. The complexity of the proposed algorithm is calculated, and the stability of the closed-loop air combat decision-making system controlled by the neural network is analysed via a Lyapunov function. The study formulates the UAV air combat process as a game and proposes a Parallel Self-Play SAC training algorithm (PSP-SAC) to improve the generalisation performance of UAV control decisions. Simulation results show that the proposed algorithm realises sample sharing and policy sharing across multiple combat environments and significantly improves the generalisation ability of the model compared with independent training.
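The abstract's central idea, parallel self-play with sample sharing and policy sharing, can be illustrated with a minimal sketch: several self-play environments feed one shared replay buffer, and a single policy is updated from the pooled experience. The class names (`SharedReplayBuffer`, `ToyCombatEnv`, `StubSACAgent`) and the toy dynamics below are illustrative assumptions, not the paper's actual model; the SAC update is stubbed out.

```python
import random

class SharedReplayBuffer:
    """Experience pool shared by all parallel combat environments (sample sharing)."""
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.data = []

    def add(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)               # drop oldest when full
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))

class ToyCombatEnv:
    """Stand-in 1-v-1 environment; in self-play both sides use the same policy."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.state = 1.0

    def step(self, action):
        self.state += action + self.rng.uniform(-0.1, 0.1)
        reward = -abs(self.state)          # placeholder reward shaping
        return self.state, reward

class StubSACAgent:
    """Placeholder for the SAC actor-critic; one policy shared by all environments."""
    def __init__(self):
        self.updates = 0

    def act(self, state):
        return -0.5 * state                # deterministic stand-in policy

    def update(self, batch):
        self.updates += 1                  # real SAC would take gradient steps here

def parallel_self_play(n_envs=4, steps=50):
    buffer = SharedReplayBuffer()
    agent = StubSACAgent()                 # policy sharing: one agent for all envs
    envs = [ToyCombatEnv(seed=i) for i in range(n_envs)]
    states = [e.state for e in envs]
    for _ in range(steps):
        for i, env in enumerate(envs):     # transitions from every env go to one buffer
            action = agent.act(states[i])
            next_state, reward = env.step(action)
            buffer.add((states[i], action, reward, next_state))
            states[i] = next_state
        agent.update(buffer.sample(32))    # single shared policy trained on pooled data
    return agent, buffer
```

With `n_envs=4` and `steps=50`, the shared buffer collects 200 transitions while the single agent performs 50 updates; compared with independent training, every environment contributes to, and benefits from, the same policy.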

Original language: English
Pages (from-to): 64-81
Number of pages: 18
Journal: CAAI Transactions on Intelligence Technology
Volume: 8
Issue number: 1
DOIs
Publication status: Published - Mar 2023

Keywords

  • SAC algorithm
  • UAV
  • air combat decision
  • deep reinforcement learning
  • parallel self-play
