TY - JOUR
T1 - Hierarchical Reinforcement Learning for Swarm Confrontation With High Uncertainty
AU - Wu, Qizhen
AU - Liu, Kexin
AU - Chen, Lei
AU - Lu, Jinhu
N1 - Publisher Copyright:
© 2004-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - In swarm robotics, confrontation, including the pursuit-evasion game, is a key scenario. High uncertainty caused by unknown opponents' strategies, dynamic obstacles, and insufficient training complicates the action space, turning it into a hybrid decision process. Although deep reinforcement learning is significant for swarm confrontation because it can handle various swarm sizes, as an end-to-end implementation it cannot deal with the hybrid process. Here, we propose a novel hierarchical reinforcement learning approach consisting of a target allocation layer, a path planning layer, and the underlying dynamic interaction mechanism between the two layers, which indicates the quantified uncertainty. It decouples the hybrid process into a discrete allocation layer and a continuous planning layer, with a probabilistic ensemble model that quantifies the uncertainty and regulates the interaction frequency adaptively. Furthermore, to overcome the unstable training process introduced by the two layers, we design an integration training method including pre-training and cross-training, which enhances training efficiency and stability. Experimental results from comparison, ablation, and real-robot studies validate the effectiveness and generalization performance of our proposed approach. In our defined experiments with twenty to forty agents, the win rate of the proposed method reaches around ninety percent, outperforming other traditional methods. Note to Practitioners - As artificial intelligence develops rapidly, robots will play a significant role in the future. In particular, swarms formed by many robots hold promising potential in civil and military applications, and deploying such swarms in games or battles is especially compelling. Reinforcement learning provides a plausible way to realize the confrontation of robotic swarms, yet several issues still need to be addressed. On the one hand, we address the uncertainty caused by the nature of the battlefield and the environment, which limits the practical implementation of swarms. On the other hand, we solve the problem that the decision process, which combines commands and actions, is a hybrid system and cannot be handled directly in swarm confrontation. Overall, our approach sheds light on artificial general intelligence and also points toward interpretable intelligence.
AB - In swarm robotics, confrontation, including the pursuit-evasion game, is a key scenario. High uncertainty caused by unknown opponents' strategies, dynamic obstacles, and insufficient training complicates the action space, turning it into a hybrid decision process. Although deep reinforcement learning is significant for swarm confrontation because it can handle various swarm sizes, as an end-to-end implementation it cannot deal with the hybrid process. Here, we propose a novel hierarchical reinforcement learning approach consisting of a target allocation layer, a path planning layer, and the underlying dynamic interaction mechanism between the two layers, which indicates the quantified uncertainty. It decouples the hybrid process into a discrete allocation layer and a continuous planning layer, with a probabilistic ensemble model that quantifies the uncertainty and regulates the interaction frequency adaptively. Furthermore, to overcome the unstable training process introduced by the two layers, we design an integration training method including pre-training and cross-training, which enhances training efficiency and stability. Experimental results from comparison, ablation, and real-robot studies validate the effectiveness and generalization performance of our proposed approach. In our defined experiments with twenty to forty agents, the win rate of the proposed method reaches around ninety percent, outperforming other traditional methods. Note to Practitioners - As artificial intelligence develops rapidly, robots will play a significant role in the future. In particular, swarms formed by many robots hold promising potential in civil and military applications, and deploying such swarms in games or battles is especially compelling. Reinforcement learning provides a plausible way to realize the confrontation of robotic swarms, yet several issues still need to be addressed. On the one hand, we address the uncertainty caused by the nature of the battlefield and the environment, which limits the practical implementation of swarms. On the other hand, we solve the problem that the decision process, which combines commands and actions, is a hybrid system and cannot be handled directly in swarm confrontation. Overall, our approach sheds light on artificial general intelligence and also points toward interpretable intelligence.
KW - artificial intelligence
KW - decision uncertainty
KW - deep reinforcement learning
KW - robotic confrontation
KW - swarm
UR - http://www.scopus.com/inward/record.url?scp=85208683123&partnerID=8YFLogxK
U2 - 10.1109/TASE.2024.3487219
DO - 10.1109/TASE.2024.3487219
M3 - Article
AN - SCOPUS:85208683123
SN - 1545-5955
JO - IEEE Transactions on Automation Science and Engineering
JF - IEEE Transactions on Automation Science and Engineering
ER -