MRCDRL: Multi-robot coordination with deep reinforcement learning

Di Wang, Hongbin Deng*, Zhenhua Pan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)

Abstract

This paper proposes a multi-robot cooperative algorithm based on deep reinforcement learning (MRCDRL). We use an end-to-end method that trains directly on the image generated from each robot's own relative perspective, together with each robot's reward, as the input. During training, it is not necessary to specify the target position or movement path of each robot; MRCDRL learns each robot's actions by training the neural network. MRCDRL uses a neural network structure modified from the dueling network architecture, in which two streams separately represent the state value function and the state-dependent action advantage function, and the outputs of the two streams are merged. The proposed method solves the resource competition problem on the one hand and, on the other, handles static and dynamic obstacle avoidance among multiple robots in real time. Our MRCDRL algorithm achieves higher accuracy and robustness than DQN and DDQN and can be effectively applied to multi-robot collaboration.
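For illustration, the two-stream merge described in the abstract, Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a'), can be sketched as below. This is a minimal PyTorch sketch under assumed details (an 84x84 robot-centered input image, the layer sizes, and the class name DuelingQNetwork are all illustrative assumptions), not the authors' exact network:

    # Minimal sketch of a dueling Q-network head (assumed architecture,
    # not the paper's exact one): a shared convolutional trunk feeds two
    # streams, a scalar state value V(s) and per-action advantages A(s, a),
    # which are merged into Q-values.
    import torch
    import torch.nn as nn

    class DuelingQNetwork(nn.Module):
        def __init__(self, in_channels: int, num_actions: int):
            super().__init__()
            # Convolutional trunk over the robot-centered image
            # (input assumed to be in_channels x 84 x 84).
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            # Stream 1: state value function V(s).
            self.value = nn.Sequential(
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, 1))
            # Stream 2: state-dependent action advantage function A(s, a).
            self.advantage = nn.Sequential(
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(), nn.Linear(512, num_actions))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            h = self.features(x)
            v = self.value(h)         # shape (batch, 1)
            a = self.advantage(h)     # shape (batch, num_actions)
            # Merge: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a');
            # subtracting the mean advantage keeps V and A identifiable.
            return v + a - a.mean(dim=1, keepdim=True)

Subtracting the mean advantage in the merge is the standard identifiability trick from the dueling architecture: without it, a constant could be shifted freely between the two streams.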

Original language: English
Pages (from-to): 68-76
Number of pages: 9
Journal: Neurocomputing
Volume: 406
DOIs
Publication status: Published - 17 Sept 2020

Keywords

  • Cooperative control
  • Image processing
  • Machine learning
