Random curiosity-driven exploration in deep reinforcement learning

Jing Li, Xinxin Shi, Jiehao Li*, Xin Zhang, Junzheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

55 Citations (Scopus)

Abstract

Reinforcement learning (RL) depends on carefully engineered environment rewards. However, for many RL tasks the rewards from the environment are extremely sparse, making it challenging for the agent to learn skills and interact with the environment. One solution to this problem is to create intrinsic rewards for agents, making the rewards dense and more suitable for learning. Recent algorithms, such as curiosity-driven exploration, usually estimate the novelty of the next state through the prediction error of a dynamics model. However, these methods are typically limited by the capacity of their dynamics models. In this paper, a random curiosity-driven model using deep reinforcement learning is proposed, which uses a target network with fixed weights to maintain the stability of the dynamics model and create more suitable intrinsic rewards. We integrate a parametric exploration method to further promote sufficient exploration. In addition, a deeper and more densely connected network is used to encode the pixel images for the policy gradient. Comparing our method against previous approaches in several environments, the experiments show that it achieves state-of-the-art performance on most but not all of the Atari games.
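The fixed-weight target network described above can be sketched as follows. This is a minimal NumPy illustration of the general idea (in the spirit of random-network-distillation-style curiosity), not the paper's actual architecture: the linear networks, dimensions, and learning rate are all illustrative assumptions. A frozen, randomly initialised target network maps an observation to features; a trainable predictor tries to match it, and the prediction error serves as the intrinsic reward, which shrinks for states the agent has visited often.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, FEAT_DIM = 8, 16  # illustrative sizes, not from the paper

# Target network: randomly initialised and never updated, which keeps
# the intrinsic-reward targets stable during training.
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))

# Predictor network: trained to match the target's output.
W_pred = rng.normal(size=(OBS_DIM, FEAT_DIM))

def intrinsic_reward(obs):
    """Prediction error against the fixed target acts as a curiosity bonus."""
    target = obs @ W_target
    pred = obs @ W_pred
    return float(np.mean((pred - target) ** 2))

def update_predictor(obs, lr=0.01):
    """One SGD step shrinking the predictor's error on this observation."""
    global W_pred
    err = obs @ W_pred - obs @ W_target        # error per feature
    grad = np.outer(obs, err) * 2.0 / FEAT_DIM  # gradient of the MSE
    W_pred -= lr * grad

obs = rng.normal(size=OBS_DIM)
r_before = intrinsic_reward(obs)
for _ in range(200):          # repeatedly "visit" the same state
    update_predictor(obs)
r_after = intrinsic_reward(obs)
# The bonus for a frequently visited state decays as the predictor learns,
# so novel states receive relatively larger intrinsic rewards.
```

Because the target is frozen, the regression problem the predictor solves never changes, avoiding the moving-target instability of learned dynamics models whose capacity limits the quality of the novelty estimate.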

Original language: English
Pages (from-to): 139-147
Number of pages: 9
Journal: Neurocomputing
Volume: 418
DOI
Publication status: Published - 22 Dec 2020
