TY - GEN
T1 - Motion control of non-holonomic constrained mobile robot using deep reinforcement learning
AU - Gao, Rui
AU - Gao, Xueshan
AU - Liang, Peng
AU - Han, Feng
AU - Lan, Bingqing
AU - Li, Jingye
AU - Li, Jian
AU - Li, Simin
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - For the motion control problem of non-holonomic constrained mobile robots, a point stabilization kinematic control law based on deep reinforcement learning is proposed. First, a kinematic model of the mobile robot is constructed to build the replay memory for deep reinforcement learning, consisting of the robot's current state, the control action, the reward, and the robot's next state, which are generated through the interaction between the mobile robot and its environment. Then, the value-network parameters of the real-time network are updated with a loss function composed of the current state-action value produced by the real-time value network and a target value, namely the next state-action value produced by the target value network. Next, the policy-network parameters of the real-time network are updated according to the current state-action value produced by the real-time value network. Finally, the target-network parameters are updated as a weighted average of the real-time network parameters and the target-network parameters, so that the mobile robot is controlled to stabilize at the desired point. Simulation and experimental results show that the deep reinforcement learning based control algorithm can effectively achieve point stabilization control of non-holonomic mobile robots.
AB - For the motion control problem of non-holonomic constrained mobile robots, a point stabilization kinematic control law based on deep reinforcement learning is proposed. First, a kinematic model of the mobile robot is constructed to build the replay memory for deep reinforcement learning, consisting of the robot's current state, the control action, the reward, and the robot's next state, which are generated through the interaction between the mobile robot and its environment. Then, the value-network parameters of the real-time network are updated with a loss function composed of the current state-action value produced by the real-time value network and a target value, namely the next state-action value produced by the target value network. Next, the policy-network parameters of the real-time network are updated according to the current state-action value produced by the real-time value network. Finally, the target-network parameters are updated as a weighted average of the real-time network parameters and the target-network parameters, so that the mobile robot is controlled to stabilize at the desired point. Simulation and experimental results show that the deep reinforcement learning based control algorithm can effectively achieve point stabilization control of non-holonomic mobile robots.
KW - Deep reinforcement learning
KW - Mobile robot
KW - Non-holonomic constrained
KW - Point stabilization
UR - http://www.scopus.com/inward/record.url?scp=85073229954&partnerID=8YFLogxK
U2 - 10.1109/ICARM.2019.8834284
DO - 10.1109/ICARM.2019.8834284
M3 - Conference contribution
AN - SCOPUS:85073229954
T3 - 2019 4th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2019
SP - 348
EP - 353
BT - 2019 4th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 4th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2019
Y2 - 3 July 2019 through 5 July 2019
ER -