TY - JOUR
T1 - Brain-Inspired Motion Learning in Recurrent Neural Network with Emotion Modulation
AU - Huang, Xiao
AU - Wu, Wei
AU - Qiao, Hong
AU - Ji, Yidao
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2018/12
Y1 - 2018/12
N2 - Based on basic emotion modulation theory and the neural mechanisms of generating complex motor patterns, we introduce a novel emotion-modulated learning rule to train a recurrent neural network, which enables a complex musculoskeletal arm and a robotic arm to perform goal-directed tasks with high accuracy and learning efficiency. Specifically, inspired by the fact that emotions can modulate the process of learning and decision making through the neuromodulatory system, we present a model of emotion generation and modulation to adaptively adjust the parameters of learning, including the reward prediction error, the speed of learning, and the randomness in action selection. Additionally, we use the Oja learning rule to adjust the recurrent weights in delayed-reinforcement tasks, which outperforms the Hebbian update rule in terms of stability and accuracy. In the experimental section, we use a musculoskeletal model of the human upper limb and a robotic arm to perform goal-directed tasks through trial-and-reward learning, respectively. The results show that emotion-based methods are able to control the arm with higher accuracy and a faster learning rate. Meanwhile, the emotional Oja agent is superior to the emotional Hebbian one in terms of performance.
AB - Based on basic emotion modulation theory and the neural mechanisms of generating complex motor patterns, we introduce a novel emotion-modulated learning rule to train a recurrent neural network, which enables a complex musculoskeletal arm and a robotic arm to perform goal-directed tasks with high accuracy and learning efficiency. Specifically, inspired by the fact that emotions can modulate the process of learning and decision making through the neuromodulatory system, we present a model of emotion generation and modulation to adaptively adjust the parameters of learning, including the reward prediction error, the speed of learning, and the randomness in action selection. Additionally, we use the Oja learning rule to adjust the recurrent weights in delayed-reinforcement tasks, which outperforms the Hebbian update rule in terms of stability and accuracy. In the experimental section, we use a musculoskeletal model of the human upper limb and a robotic arm to perform goal-directed tasks through trial-and-reward learning, respectively. The results show that emotion-based methods are able to control the arm with higher accuracy and a faster learning rate. Meanwhile, the emotional Oja agent is superior to the emotional Hebbian one in terms of performance.
KW - Brain-inspired model
KW - emotion
KW - motion learning
KW - recurrent neural network (RNN)
UR - http://www.scopus.com/inward/record.url?scp=85047990661&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2018.2843563
DO - 10.1109/TCDS.2018.2843563
M3 - Article
AN - SCOPUS:85047990661
SN - 2379-8920
VL - 10
SP - 1153
EP - 1164
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
IS - 4
M1 - 8371307
ER -