
Reinforcement Learning-Based Pathfinding for Multiple UAVs Facing Abrupt Hazardous Areas

  • Qizhen Wu
  • Lei Chen*
  • Kexin Liu
  • Jinhu Lü
  *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Planning feasible paths for multiple uncrewed aerial vehicles (UAVs) amidst abrupt hazardous areas is a critical safety challenge, where existing methods often lack safety guarantees and uncertainty handling. To address this, we propose a novel multi-agent reinforcement learning (MARL) approach for the UAV pathfinding problem. Our method ensures rapid responsiveness and adherence to safety constraints through the integration of a control barrier function, guaranteeing safe replanning even during sudden route changes. To overcome the potential inefficiency of purely reactive safety, we introduce a probabilistic neural network that quantifies hazard uncertainty, enhancing the anticipation of sudden dangers. Finally, to utilize swarm intelligence for mutual risk avoidance, the approach incorporates neighbors’ observations using a proximity-weighted mean-field mechanism, allowing each UAV to consider the impact of this aggregated information in its planning. Extensive simulations show that our method achieves a planning success rate surpassing 90% in transient environments, outperforming traditional planners and other MARL baselines. Real-world experiments further validate the approach’s adaptability, demonstrating its practical value for safety-critical missions.
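The proximity-weighted mean-field mechanism described above can be illustrated with a minimal sketch: each UAV averages its neighbors' observation vectors, weighting nearer neighbors more heavily so that closer agents dominate the aggregated signal. The inverse-distance weighting, the function name, and all parameters here are illustrative assumptions, not the paper's actual implementation.

```python
import math

def proximity_weighted_mean_field(own_pos, neighbors, eps=1e-6):
    """Aggregate neighbors' observations with inverse-distance weights.

    own_pos   : (x, y) position of the ego UAV.
    neighbors : list of (position, observation) pairs, where each
                observation is an equal-length list of floats.
    eps       : small constant to avoid division by zero.

    Illustrative sketch only; the weighting scheme is an assumption.
    """
    if not neighbors:
        return None
    # Closer neighbors receive larger weights.
    weights = [1.0 / (math.dist(own_pos, pos) + eps) for pos, _ in neighbors]
    total = sum(weights)
    dim = len(neighbors[0][1])
    agg = [0.0] * dim
    for w, (_, obs) in zip(weights, neighbors):
        for i in range(dim):
            agg[i] += (w / total) * obs[i]
    return agg
```

For example, a neighbor at distance 1 contributes three times the weight of a neighbor at distance 3, so its observation dominates the aggregate that each UAV feeds into its planning policy.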

Original language: English
Pages (from-to): 4848-4860
Number of pages: 13
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 23
DOI
Publication status: Published - 2026
Published externally: Yes

