Random curiosity-driven exploration in deep reinforcement learning

Jing Li, Xinxin Shi, Jiehao Li*, Xin Zhang, Junzheng Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

56 Citations (Scopus)

Abstract

Reinforcement learning (RL) depends on carefully engineered environment rewards. However, for many RL tasks the rewards provided by the environment are extremely sparse, making it challenging for the agent to learn skills by interacting with the environment. One solution to this problem is to create intrinsic rewards for agents, making rewards denser and more suitable for learning. Recent algorithms, such as curiosity-driven exploration, usually estimate the novelty of the next state through the prediction error of a dynamics model. However, these methods are typically limited by the capacity of their dynamics models. In this paper, a random curiosity-driven model using deep reinforcement learning is proposed, which uses a target network with fixed weights to maintain the stability of the dynamics model and to create more suitable intrinsic rewards. We integrate the parametric exploration method to further promote sufficient exploration. In addition, a deeper and more densely connected network is utilized to encode the pixel images for the policy gradient. Comparing our method against previous approaches in several environments, the experiments show that it achieves state-of-the-art performance on most, but not all, of the Atari games.
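The core idea described in the abstract (a fixed-weight random target network whose prediction error serves as an intrinsic reward) can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the linear "networks", dimensions, and learning rate are assumptions, and the agent/policy machinery is omitted. The predictor is trained to match the fixed target's output, so states visited often yield a shrinking novelty bonus while unfamiliar states keep a large one.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, FEAT_DIM = 8, 16  # hypothetical sizes for illustration

# Fixed, randomly initialized target network: its weights are
# never updated, which keeps the prediction objective stable.
W_target = rng.normal(size=(OBS_DIM, FEAT_DIM))

# Predictor network, trained to match the target's features.
W_pred = rng.normal(size=(OBS_DIM, FEAT_DIM))

def intrinsic_reward(obs):
    """Novelty bonus: squared error between predictor and fixed target."""
    err = obs @ W_pred - obs @ W_target
    return float(np.mean(err ** 2))

def train_predictor(obs, lr=0.05):
    """One gradient step on the mean squared prediction error."""
    global W_pred
    err = obs @ W_pred - obs @ W_target           # shape (FEAT_DIM,)
    grad = 2.0 * np.outer(obs, err) / FEAT_DIM    # d(MSE)/d(W_pred)
    W_pred -= lr * grad

# A repeatedly visited state becomes "familiar": the predictor
# catches up to the target there, so the intrinsic reward decays.
state = rng.normal(size=OBS_DIM)
before = intrinsic_reward(state)
for _ in range(200):
    train_predictor(state)
after = intrinsic_reward(state)
print(after < before)  # novelty bonus shrinks for familiar states
```

In practice this bonus would be added to the sparse environment reward at each step, so the agent is pushed toward states where the predictor's error, and hence the estimated novelty, is still high.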

Original language: English
Pages (from-to): 139-147
Number of pages: 9
Journal: Neurocomputing
Volume: 418
Publication status: Published - 22 Dec 2020

Keywords

  • Curiosity-driven exploration
  • Deep reinforcement learning
  • Intrinsic rewards
