Abstract
This paper proposes a multi-robot cooperative algorithm based on deep reinforcement learning (MRCDRL). The method is trained end-to-end, taking as input an image generated from each robot's own relative perspective together with each robot's reward. During training, it is not necessary to specify a target position or a movement path for each robot; MRCDRL learns each robot's actions by training a neural network. The network structure is modified from the dueling network architecture, in which two streams separately represent the state value function and the state-dependent action advantage function, and the outputs of the two streams are merged. The proposed method solves the resource competition problem and, at the same time, handles static and dynamic obstacle avoidance among multiple robots in real time. The MRCDRL algorithm achieves higher accuracy and robustness than DQN and DDQN and can be effectively applied to multi-robot collaboration.
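The merging of the two streams that the abstract describes typically follows the standard dueling aggregation Q(s, a) = V(s) + (A(s, a) − mean over a′ of A(s, a′)). Below is a minimal PyTorch sketch of such a two-stream head, assuming that standard formulation; the class name `DuelingHead`, the layer widths, and `feature_dim` are illustrative assumptions, not the paper's modified architecture.

```python
import torch
import torch.nn as nn


class DuelingHead(nn.Module):
    """Sketch of a dueling head (hypothetical, not the paper's exact network):
    a value stream V(s) and an advantage stream A(s, a), merged as
    Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, feature_dim: int, num_actions: int):
        super().__init__()
        # Value stream: scalar estimate of the state value V(s).
        self.value = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )
        # Advantage stream: one estimate A(s, a) per action.
        self.advantage = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(), nn.Linear(128, num_actions)
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        v = self.value(features)      # shape (batch, 1)
        a = self.advantage(features)  # shape (batch, num_actions)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)  # Q-values, (batch, num_actions)
```

In a multi-robot setting such as the one the abstract describes, `features` would come from a convolutional encoder applied to each robot's perspective image, and one Q-value per discrete action would be produced per robot.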
Original language | English |
---|---|
Pages (from-to) | 68-76 |
Number of pages | 9 |
Journal | Neurocomputing |
Volume | 406 |
DOIs | |
Publication status | Published - 17 Sept 2020 |
Keywords
- Cooperative control
- Image processing
- Machine learning