TY - GEN
T1 - Improving learning efficiency of recurrent neural network through adjusting weights of all layers in a biologically-inspired framework
AU - Huang, Xiao
AU - Wu, Wei
AU - Yin, Peijie
AU - Qiao, Hong
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
N2 - Brain-inspired models have become a focus in the artificial intelligence field. As a biologically plausible network, the recurrent neural network in the reservoir computing framework has been proposed as a popular model of cortical computation because of its complicated dynamics and highly recurrent connections. To train this network, unlike adjusting only the readout weights as in liquid computing theory or changing only the internal recurrent weights, and inspired by the global modulation of human emotions on cognition and motion control, we introduce a novel reward-modulated Hebbian learning rule that trains the network by adjusting not only the internal recurrent weights but also the input weights and readout weights together, using solely delayed, phasic rewards. Experimental results show that the proposed method can train a recurrent neural network in a near-chaotic regime to complete motion control and working-memory tasks with higher accuracy and learning efficiency.
AB - Brain-inspired models have become a focus in the artificial intelligence field. As a biologically plausible network, the recurrent neural network in the reservoir computing framework has been proposed as a popular model of cortical computation because of its complicated dynamics and highly recurrent connections. To train this network, unlike adjusting only the readout weights as in liquid computing theory or changing only the internal recurrent weights, and inspired by the global modulation of human emotions on cognition and motion control, we introduce a novel reward-modulated Hebbian learning rule that trains the network by adjusting not only the internal recurrent weights but also the input weights and readout weights together, using solely delayed, phasic rewards. Experimental results show that the proposed method can train a recurrent neural network in a near-chaotic regime to complete motion control and working-memory tasks with higher accuracy and learning efficiency.
UR - http://www.scopus.com/inward/record.url?scp=85031001526&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7965944
DO - 10.1109/IJCNN.2017.7965944
M3 - Conference contribution
AN - SCOPUS:85031001526
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 873
EP - 879
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
Y2 - 14 May 2017 through 19 May 2017
ER -