Model-Based Reinforcement Learning: World Models

Sebastian Dittert · Published in Analytics Vidhya · Dec 30, 2020 · 12 min read


The image of the world around us, which we carry in our head, is just a model. Nobody in his head imagines all the world, government or country. He has only selected concepts, and relationships between them, and uses those to represent the real system. (Forrester 1971)

In this article, I want to introduce and discuss the paper World Models by David Ha and Jürgen Schmidhuber.

Motivation

In our daily life, we are confronted with tons of information from the world around us, streaming in through our different senses. Since we are not able to process in detail everything we see, smell, feel, or hear, our brain learns abstract representations. These representations cover spatial and temporal aspects and help us navigate and interact with our world.

Based on these representations, we build our own model of the world surrounding us. It is important to note that this model differs from person to person because of the diverse experiences, feelings, and situations each of us has lived through.

However, for all of us, these models that we create subconsciously help us significantly in our daily life: physically, in how we move around and interact with the environment (reflexes), but also mentally, by giving us hints about how things could work…

Sebastian Dittert is a Ph.D. student at UPF Barcelona in Deep Reinforcement Learning and writes for Analytics Vidhya.