
MRCDRL: Multi-robot coordination with deep reinforcement learning

Research output: Contribution to journal › Article › peer-review

Abstract

This paper proposes a multi-robot cooperative algorithm based on deep reinforcement learning (MRCDRL). The method is trained end to end, taking as input an image rendered from each robot's own relative perspective together with each robot's reward. During training, it is not necessary to specify a target position or movement path for each robot; MRCDRL learns each robot's actions by training the neural network. The network architecture is modified from the dueling network structure, in which two streams separately estimate the state value function and the state-dependent action advantage function, and the outputs of the two streams are merged. The proposed method resolves resource competition among the robots and, at the same time, performs static and dynamic obstacle avoidance between multiple robots in real time. The new MRCDRL algorithm achieves higher accuracy and robustness than DQN and DDQN and can be effectively applied to multi-robot collaboration.
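The merge of the two streams described above can be illustrated with a small numerical sketch. This is not the paper's implementation; the values below are made up, and the merge rule shown is the standard identifiable form of the dueling architecture, Q(s, a) = V(s) + A(s, a) − mean over actions of A(s, a):

```python
import numpy as np

# Hypothetical stream outputs for a batch of 2 states and 3 actions
# (illustrative numbers only, not from the paper).
value = np.array([[1.0], [0.5]])                 # V(s), shape (2, 1)
advantage = np.array([[0.2, -0.1, 0.5],
                      [0.0,  0.3, -0.3]])        # A(s, a), shape (2, 3)

# Merge the two streams: subtracting the per-state mean advantage makes
# the decomposition identifiable (the advantages average to zero).
q = value + advantage - advantage.mean(axis=1, keepdims=True)

# Consequently, the per-state mean of Q recovers V(s).
print(q.shape)                                               # (2, 3)
print(np.allclose(q.mean(axis=1, keepdims=True), value))     # True
```

A consequence of the mean subtraction is that the value stream alone carries the state's overall worth, while the advantage stream only ranks actions relative to each other, which is what lets the two streams be trained jointly without ambiguity.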

Original language: English
Pages (from-to): 68-76
Number of pages: 9
Journal: Neurocomputing
Volume: 406
DOI
Publication status: Published - 17 Sep 2020
