Abstract
Accomplishing cooperative tasks in dynamic environments is challenging for multi-agent systems. Reinforcement learning is a popular approach in this field, as it enables agents to make real-time decisions; however, large state and action spaces often lead to poor performance, such as slow convergence and suboptimal policies. To address this issue, we propose a hierarchical framework in which long-horizon, complicated tasks are decomposed into multiple subtasks. At the low level, each subtask has a corresponding decision-making model trained with the Soft Actor-Critic reinforcement learning algorithm. A high-level component then determines which subtask to tackle at any given time. We discuss our method in the context of the popular hunting problem involving pursuers and an evader, and simulations demonstrate its efficacy and feasibility in this setting.
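The sketch below illustrates the hierarchical structure described in the abstract: a high-level component that selects the current subtask and low-level per-subtask policies that output actions. It is only a minimal, illustrative example, not the authors' code: the subtask names ("approach", "encircle"), the state layout, and the rule-based high-level selector are assumptions, and the placeholder policies stand in for networks that the paper trains with Soft Actor-Critic.

```python
# Illustrative sketch (not the paper's implementation) of a hierarchical
# controller for a pursuit-evasion ("hunting") task.
import math
import random
from typing import Callable, Dict, Tuple

State = Tuple[float, float, float, float]  # (pursuer_x, pursuer_y, evader_x, evader_y)
Action = Tuple[float, float]               # 2-D velocity command for one pursuer

def approach_policy(state: State) -> Action:
    """Low-level subtask: move straight toward the evader (stand-in for a trained SAC policy)."""
    px, py, ex, ey = state
    dx, dy = ex - px, ey - py
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

def encircle_policy(state: State) -> Action:
    """Low-level subtask: orbit the evader to cut off escape directions (stand-in for SAC)."""
    px, py, ex, ey = state
    dx, dy = ex - px, ey - py
    norm = math.hypot(dx, dy) or 1.0
    # Move tangentially to the pursuer-evader line.
    return (-dy / norm, dx / norm)

SUBTASKS: Dict[str, Callable[[State], Action]] = {
    "approach": approach_policy,
    "encircle": encircle_policy,
}

def high_level_select(state: State, capture_radius: float = 1.0) -> str:
    """High-level component: decide which subtask to execute at this time step.
    A simple distance threshold stands in for the learned selector here."""
    px, py, ex, ey = state
    dist = math.hypot(ex - px, ey - py)
    return "approach" if dist > 3.0 * capture_radius else "encircle"

def step_pursuer(state: State) -> Action:
    """Hierarchical decision: high level picks the subtask, low level outputs the action."""
    return SUBTASKS[high_level_select(state)](state)

if __name__ == "__main__":
    state: State = (0.0, 0.0, 5.0, 4.0)
    for t in range(5):
        action = step_pursuer(state)
        print(f"t={t} subtask={high_level_select(state)} action=({action[0]:.2f}, {action[1]:.2f})")
        # Toy dynamics: pursuer moves by its action, evader drifts randomly.
        px, py, ex, ey = state
        state = (px + action[0], py + action[1],
                 ex + random.uniform(-0.3, 0.3), ey + random.uniform(-0.3, 0.3))
```

The point of the decomposition is that each low-level policy only has to learn a short-horizon behavior over a reduced state-action space, while the high-level selector handles when to switch between them.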
| Original language | English |
|---|---|
| Pages (from-to) | 480-485 |
| Number of pages | 6 |
| Journal | Proceedings of the IEEE International Conference on Cybernetics and Intelligent Systems, CIS |
| Issue number | 2024 |
| DOIs | |
| Publication status | Published - 2024 |
| Event | 11th IEEE International Conference on Cybernetics and Intelligent Systems and 11th IEEE International Conference on Robotics, Automation and Mechatronics, CIS-RAM 2024 - Hangzhou, China. Duration: 8 Aug 2024 → 11 Aug 2024 |
Keywords
- cooperative systems and control
- deep reinforcement learning
- multi-agent systems