Optimal Path-Planning of Nonholonomic Terrain Robots for Dynamic Obstacle Avoidance Using Single-Time Velocity Estimator and Reinforcement Learning Approach

Hamid Taghavifar, Bin Xu*, Leyla Taghavifar, Yechen Qin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

19 Citations (Scopus)

Abstract

A single-time velocity estimator-based reinforcement learning (RL) algorithm, integrated with a chaotic metaheuristic optimization technique, is proposed in this article for the optimal path-planning of nonholonomic robots under a moving/stationary obstacle avoidance strategy. A further contribution of the present study is the use of terramechanics principles to incorporate the effects of wheel sinkage into deformable terrain on the dynamics of the robot, with the aim of finding the optimal compensating force/torque magnitude to sustain robust and smooth motion. The designed control-oriented system incorporates a cost function with weighted components associated with target tracking and obstacle avoidance. The designed velocity estimator contributes to a finite-state Markov decision process (MDP) used to train the transition probabilities of the problem objectives. Based on the obtained results, the optimal Q-learning adjusting factor, which minimizes the tracking error and obstacle collision risk propagation profiles, is found at 0.22. The results further confirm the promising capacity of the proposed optimization-based RL algorithm for collision avoidance control of nonholonomic robots on deformable terrains.
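For context on the terramechanics component, wheel sinkage on deformable terrain is commonly described by Bekker's pressure-sinkage relation. The article's exact terrain model is not reproduced on this page, so the following is only the standard form, given for illustration:

```latex
% Bekker pressure-sinkage relation (standard terramechanics form;
% the article may use a different or extended model)
p = \left(\frac{k_c}{b} + k_\phi\right) z^n
% p:        normal ground pressure under the wheel
% b:        width of the wheel-terrain contact patch
% z:        sinkage depth
% k_c, k_\phi: cohesive and frictional moduli of terrain deformation
% n:        sinkage exponent
```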
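Likewise, a minimal sketch of how a weighted tracking/avoidance cost might feed a tabular Q-learning update is shown below. The state/action encoding, the weight w (the "adjusting factor", reported optimal at 0.22), and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical weighted stage cost: w trades off target tracking against
# obstacle collision risk (the paper's "adjusting factor"; 0.22 was
# reported optimal). tracking_error and collision_risk are assumed to
# come from the estimator/sensing pipeline.
def stage_cost(tracking_error, collision_risk, w=0.22):
    return (1.0 - w) * tracking_error + w * collision_risk

# Standard tabular Q-learning update (not the paper's exact variant):
# the reward is the negative stage cost, so minimizing cost maximizes return.
def q_update(Q, s, a, s_next, cost, alpha=0.1, gamma=0.95):
    target = -cost + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# Example usage over a discretized state/action grid (illustrative sizes).
Q = np.zeros((100, 4))          # 100 states, 4 steering/velocity actions
cost = stage_cost(tracking_error=0.8, collision_risk=0.3)
Q = q_update(Q, s=5, a=2, s_next=6, cost=cost)
```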

Original language: English
Article number: 8886591
Pages (from-to): 159347-159356
Number of pages: 10
Journal: IEEE Access
Volume: 7
DOIs:
Publication status: Published - 2019

Keywords

  • Mechatronics
  • artificial intelligence
  • path-planning
  • terramechanics
