Reinforcement learning keeps attracting people who want to work on it, and efforts like gym have made it much more accessible. Hardware, though, remains the big hurdle to cross: for a beginner, getting enough computing power, and the time it takes to train a model, are always an issue.
Google Colaboratory provides a free 12 GB GPU with up to 12 hours of continuous runtime. RL, however, also requires rendering the environment's visuals, which a headless Colab VM can't do out of the box. Here is a short tutorial on getting over that issue & continuing to code for free.
The aim of this post is to use gym & gym[atari] on Colab. For Deep Learning & the rest of the setup you may want to refer to this article, which will also give you the basic background.
Most of the required Python packages are already available on Colab. To run gym, you first have to install prerequisites like xvfb, OpenGL & other python-dev packages:
!apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
Now, for rendering the environment, I prefer to use pyvirtualdisplay, so install it:
!pip install pyvirtualdisplay
!pip install pyglet
To activate the virtual display, we need to run a small script once per session, before training an agent, as follows:
Moving on to the gym requirements: gym is already installed, but not with the Atari game environments. To get those:
!pip install gym
!pip install "gym[atari]"
Now you are ready to run & render the gym environments. Check the complete notebook:
It's a simple gym example to start with. For more, you can check this git repo, where I have been sharing links to Colab notebooks using gym both with & without Atari.
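If the embedded notebook doesn't load, a minimal rollout along those lines looks roughly like this. This is a hedged sketch that tolerates both the older and newer gym step/reset return shapes, since the API changed across versions:

```python
import gym

# CartPole is the classic starter environment
env = gym.make("CartPole-v0")

obs = env.reset()
# Newer gym versions return (obs, info) from reset()
if isinstance(obs, tuple):
    obs = obs[0]

total_reward = 0.0
done = False
while not done:
    # Random policy: sample an action from the action space
    action = env.action_space.sample()
    result = env.step(action)
    # Older gym: (obs, reward, done, info); newer: (obs, reward, terminated, truncated, info)
    if len(result) == 5:
        obs, reward, terminated, truncated, info = result
        done = terminated or truncated
    else:
        obs, reward, done, info = result
    total_reward += reward
    # env.render()  # with the virtual display active, rendering works on Colab

env.close()
print("Episode reward:", total_reward)
```

With the virtual display started earlier in the session, uncommenting `env.render()` draws the frames to the off-screen display instead of raising an error.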
It's a very efficient way to experience & contribute to reinforcement learning. The platform's limited runtime can still be a bother, but to start, it's more than enough. Continue free coding!!