TY - JOUR
T1 - A Penetration Method for UAV Based on Distributed Reinforcement Learning and Demonstrations
AU - Li, Kexv
AU - Wang, Yue
AU - Zhuang, Xing
AU - Yin, Hao
AU - Liu, Xinyu
AU - Li, Hanyu
N1 - Publisher Copyright:
© 2023 by the authors.
PY - 2023/4
Y1 - 2023/4
N2 - Penetration by unmanned aerial vehicles (UAVs) is an essential element of modern warfare, and enhancing the autonomous penetration capability of UAVs through machine learning has become a research hotspot. However, current methods for generating autonomous penetration strategies for UAVs suffer from excessive sample demand. To reduce this demand, this paper proposes a combination policy learning (CPL) algorithm that combines distributed reinforcement learning with demonstrations. Innovatively, the action of the CPL algorithm is jointly determined by the initial policy obtained from demonstrations and the target policy of an asynchronous advantage actor-critic network, so that demonstrations retain their guiding role during early training. In a complex, unknown, dynamic environment, 1000 training experiments and 500 test experiments were conducted for the CPL algorithm and related baseline algorithms. The results show that the CPL algorithm has the smallest sample demand, the fastest convergence, and the highest penetration success rate of all the algorithms tested, and that it is strongly robust in dynamic environments.
AB - Penetration by unmanned aerial vehicles (UAVs) is an essential element of modern warfare, and enhancing the autonomous penetration capability of UAVs through machine learning has become a research hotspot. However, current methods for generating autonomous penetration strategies for UAVs suffer from excessive sample demand. To reduce this demand, this paper proposes a combination policy learning (CPL) algorithm that combines distributed reinforcement learning with demonstrations. Innovatively, the action of the CPL algorithm is jointly determined by the initial policy obtained from demonstrations and the target policy of an asynchronous advantage actor-critic network, so that demonstrations retain their guiding role during early training. In a complex, unknown, dynamic environment, 1000 training experiments and 500 test experiments were conducted for the CPL algorithm and related baseline algorithms. The results show that the CPL algorithm has the smallest sample demand, the fastest convergence, and the highest penetration success rate of all the algorithms tested, and that it is strongly robust in dynamic environments.
KW - UAV penetration
KW - asynchronous advantage actor-critic
KW - demonstrations
KW - distributed reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85153771926&partnerID=8YFLogxK
U2 - 10.3390/drones7040232
DO - 10.3390/drones7040232
M3 - Article
AN - SCOPUS:85153771926
SN - 2504-446X
VL - 7
JO - Drones
JF - Drones
IS - 4
M1 - 232
ER -