Advanced Exploration: Curiosity-driven Exploration

Sebastian Dittert
Published in Analytics Vidhya
8 min read · Sep 10, 2020


This article is part of a series about advanced exploration strategies in RL.

Motivation

For a reinforcement learning agent to be successful, it must frequently encounter a reward signal. Based on this feedback, the agent adapts its behavior, trying to maximize the overall return received in an episode. However, in some environments the reward signal can be extremely sparse or absent altogether, which makes it very difficult for the agent to learn and to reach a goal state that solves the proposed task.

Most, if not all, standard RL approaches fail on tasks that offer very sparse or no reward signals. In those settings, advanced exploration strategies help the agent succeed.

In this article, we cover curiosity-driven agents. These agents have an intrinsic curiosity that helps them explore the environment successfully without any need for extrinsic reward feedback.
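To make the idea concrete, here is a minimal sketch of how an intrinsic curiosity bonus can be computed, in the spirit of the Intrinsic Curiosity Module (Pathak et al., 2017): a learned forward model predicts the next state from the current state and action, and its prediction error serves as the intrinsic reward. All names, dimensions, and the toy linear model below are illustrative assumptions, not the article's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2

# Toy linear forward model with random (untrained) weights.
# In practice this would be a neural network trained on observed
# (state, action, next_state) transitions.
W = rng.normal(size=(STATE_DIM + ACTION_DIM, STATE_DIM))

def predict_next_state(state, action):
    """Forward-model prediction of the next state."""
    return np.concatenate([state, action]) @ W

def curiosity_bonus(state, action, next_state, scale=1.0):
    """Intrinsic reward: squared prediction error of the forward model.

    Familiar transitions are predicted well (small bonus); novel,
    rarely visited transitions are predicted poorly (large bonus),
    which pushes the agent toward unexplored parts of the environment.
    """
    error = next_state - predict_next_state(state, action)
    return scale * 0.5 * float(error @ error)

def total_reward(extrinsic, intrinsic, beta=0.01):
    """The agent maximizes extrinsic reward plus a weighted curiosity bonus."""
    return extrinsic + beta * intrinsic
```

Even when the extrinsic reward is zero everywhere, `total_reward` still provides a non-zero learning signal through the curiosity bonus, which is exactly what lets these agents explore sparse-reward environments.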

But what is curiosity, and how does it help our agents? Motivation, or curiosity, has been used to explain the need to explore the environment and discover novel states. Think of us humans: imagine a young child or a baby…



Ph.D. student at UPF Barcelona for Deep Reinforcement Learning