TY - JOUR
T1 - Robust Motion Learning for Musculoskeletal Robots Based on a Recurrent Neural Network and Muscle Synergies
AU - Chen, Jiahao
AU - Wu, Yaxiong
AU - Yao, Chaojing
AU - Huang, Xiao
N1 - Publisher Copyright:
IEEE
PY - 2024
Y1 - 2024
N2 - Musculoskeletal robots with human-like joints, muscles, and actuation mechanisms are characterized by exceptional dexterity, compliance, and versatility. However, existing reinforcement learning methods for such robots rely on precise and sufficient state observation, rendering them vulnerable to perturbations. To address this limitation, this paper proposes a robust motion learning method based on a recurrent neural network (RNN) and muscle synergy. First, the proposed method uses states from the task, joint, and muscle spaces to construct an RNN-based neuromuscular controller. A motion learning method with a muscle synergy constraint is then developed. Theoretical analysis further confirms that the RNN-based controller is more robust to perturbations in state observation than a multilayer perceptron (MLP)-based controller. The proposed method is evaluated on a simulated musculoskeletal robot and demonstrates greater robustness than existing MLP-based reinforcement learning methods. It is also validated on a musculoskeletal robot hardware system, indicating its potential for real-world applications. Note to Practitioners—Musculoskeletal robots have shown promising potential in various applications, but their development and deployment have been restricted by the limited robustness of existing reinforcement learning methods. These methods work well in simulation with precise and sufficient state observation, but their performance degrades considerably in real-world environments where state observation is perturbed and insufficient. In this article, we propose a novel method that improves the robustness of motion learning for musculoskeletal robots using a recurrent neural network and muscle synergy. The proposed method is theoretically and experimentally validated to perform well not only under precise and sufficient state observation but also under perturbed and insufficient state observation. Our results demonstrate the effectiveness of the proposed method and motivate the application of reinforcement learning to musculoskeletal robots. This research advances robust motion learning for musculoskeletal robots and paves the way for their wider adoption in real-world applications.
KW - Learning systems
KW - Muscles
KW - Musculoskeletal robots
KW - Musculoskeletal system
KW - Recurrent neural networks
KW - Reinforcement learning
KW - Robots
KW - Robustness
KW - muscle synergy
KW - recurrent neural network
KW - robustness
UR - http://www.scopus.com/inward/record.url?scp=85189326961&partnerID=8YFLogxK
U2 - 10.1109/TASE.2024.3379247
DO - 10.1109/TASE.2024.3379247
M3 - Article
AN - SCOPUS:85189326961
SN - 1545-5955
SP - 1
EP - 16
JO - IEEE Transactions on Automation Science and Engineering
JF - IEEE Transactions on Automation Science and Engineering
ER -