A Hierarchical Framework for Cooperative Tasks in Multi-agent Systems

Yuanning Zhu*, Qingkai Yang, Daiying Tian, Hao Fang

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

In this paper, we propose a hierarchical framework that enables multi-agent systems to accomplish cooperative tasks in dynamic environments. Such tasks are challenging: reinforcement learning is a popular approach in this field because it enables agents to make real-time decisions, but large state and action spaces often lead to poor performance, such as slow convergence and suboptimal policies. To address this issue, we adopt a hierarchical framework in which long-horizon, complicated tasks are decomposed into multiple subtasks. At the low level, each subtask has a corresponding decision-making model trained with the Soft Actor-Critic reinforcement learning algorithm. A high-level component then determines which subtask to tackle at any given time. We discuss our method in the context of the well-known hunting problem involving pursuers and an evader, and simulations demonstrate the efficacy and feasibility of our method in the hunting problem environment setting.
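
The sketch below illustrates the two-level structure the abstract describes: a high-level component selects the active subtask, and each subtask is handled by its own low-level policy. It is a minimal illustration only; the subtask names ("approach", "encircle", "capture"), the distance-based selection rule, and the stand-in policies are assumptions, since the abstract does not specify the paper's decomposition, and a trained Soft Actor-Critic actor would replace the random placeholder policy.

```python
import numpy as np

# Hypothetical subtask decomposition for the hunting (pursuit-evasion)
# setting; these names are illustrative placeholders, not the paper's.
SUBTASKS = ("approach", "encircle", "capture")


class SubtaskPolicy:
    """Stand-in for a low-level decision-making model.

    In the actual framework each subtask would load its own actor network
    trained with Soft Actor-Critic; here we emit random bounded velocities
    so the sketch runs end to end.
    """

    def __init__(self, action_dim: int = 2, max_speed: float = 1.0):
        self.action_dim = action_dim
        self.max_speed = max_speed

    def act(self, observation: np.ndarray) -> np.ndarray:
        # A trained SAC actor would map the observation to an action here.
        return np.random.uniform(-self.max_speed, self.max_speed, self.action_dim)


def select_subtask(pursuer_pos, evader_pos, capture_radius=0.5, encircle_radius=2.0):
    """High-level component: choose the active subtask from the current state.

    This simple distance-based rule is an assumption standing in for the
    paper's high-level decision logic.
    """
    distance = np.linalg.norm(pursuer_pos - evader_pos)
    if distance <= capture_radius:
        return "capture"
    if distance <= encircle_radius:
        return "encircle"
    return "approach"


if __name__ == "__main__":
    policies = {name: SubtaskPolicy() for name in SUBTASKS}
    pursuer = np.array([0.0, 0.0])
    evader = np.array([3.0, 1.5])

    for step in range(10):
        subtask = select_subtask(pursuer, evader)        # high level
        observation = np.concatenate([pursuer, evader])
        action = policies[subtask].act(observation)      # low level
        pursuer = pursuer + 0.1 * action                 # integrate velocity
        print(f"step {step}: subtask={subtask}, pursuer={pursuer.round(2)}")
```

The design point the sketch captures is that the high-level selector and the low-level policies are decoupled: each subtask policy can be trained independently on its own shorter-horizon objective, which is what mitigates the slow convergence associated with training a single flat policy over the full state and action space.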

Keywords

  • cooperative systems and control
  • deep reinforcement learning
  • multi-agent systems
