What you have to know before creating a container of a redis docker image
I got up this morning and thought “I’ll create a container with a redis docker image and start a little database project using lots of different containers”. Now, 5 hours later, I think I’ve worked out how to create a container with a redis docker image.
Not that I’m sure, mind you. A lot of people out there know a lot more about this than I do, so if I’m totally wrong, just let me know.
The first step is to install docker. This is just a simple apt-get or dnf command on an ubuntu/debian/fedora box. That should take about 2 minutes.
The next step is to download a redis image. You can go to the redis page on Docker Hub and you can see the command at the top right: docker pull redis. So you do that. That should take about a minute.
Then you scroll down the page and you find
docker run --name some-redis -d redis
So you do that. That takes about a minute. And you end up with a pretty useless container. Why is it pretty useless? Well, you won’t be able to use redis-cli to check if it’s working and you’ll have to do some work to backup the data you store on it.
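To see concretely what I mean by “useless”, here’s a quick check (assuming the some-redis container from above is running, redis-cli is installed on the host, and nothing else on the host is listening on 6379):

```shell
# From the host this fails with "connection refused": no ports were
# published, so nothing on the host is listening on 6379.
redis-cli -h 127.0.0.1 -p 6379 ping

# You *can* still talk to redis by running redis-cli inside the
# container itself (the redis image ships with redis-cli):
docker exec -it some-redis redis-cli ping   # should print PONG
```

So the container works internally, but reaching it from the host means going through docker exec every time.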
No really, don’t do that. There’s a vanishingly small chance that that will do what you want. Here instead is, to the best of my knowledge, what you probably want to do:
what you want to do
docker volume create my-data-volume # or any name
docker run -d -v my-data-volume:/data --name my-redis-container -p 6379:6379 redis
Ok. Let’s have a look at what that monstrosity does and why you need it.
“run” creates a new container based on the image named in the last word of the command. The position matters: anything you put after the image name is treated as the command to run inside the container, not as an option to docker. “run” then starts the container and runs a command on it. Here we haven’t given a command, so the image’s default is used (which, for the redis image, starts the redis server).
In this case you could replace the run command with the following two commands:
# docker run -d -v my-data-volume:/data --name my-redis-container -p 6379:6379 redis
# can be replaced with
docker create -v my-data-volume:/data --name my-redis-container -p 6379:6379 redis
docker start my-redis-container
which has the advantage of being clearer, if longer.
“run” takes a -d option. If you don’t stipulate -d, the container runs in the foreground, attached to your terminal: you get redis’s log output scrolling past, and Ctrl-C stops the container. In other words: hassle.
“-v” allows you to give the name of the volume followed by a colon followed by the path the volume should be mounted to in the new container.
“--name” says that the new container will be called my-redis-container.
“-p port_on_host:port_on_container” publishes the container’s port on the host. It’s necessary so that you can check to see if everything’s working by using the redis-cli client from the host.
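Here’s a quick sanity check, assuming the my-redis-container from above is running and redis-cli (e.g. from the redis-tools package) is installed on the host:

```shell
# With -p 6379:6379 in place, redis is reachable from the host:
redis-cli -h 127.0.0.1 -p 6379 ping            # should print PONG
redis-cli -h 127.0.0.1 -p 6379 set greeting hi
redis-cli -h 127.0.0.1 -p 6379 get greeting    # should print "hi"
```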
Once you’ve done all this, you will have a redis container, open to the outside world on 6379 (the default redis port), and data will be stored persistently in a file on the hard drive of the host. You can find the file by entering:
$ docker volume inspect my-data-volume
# this will tell you the following (or similar):
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/my-data-volume/_data",
        "Name": "my-data-volume",
        "Options": null,
        "Scope": "local"
    }
]
and your redis data-file is stored under Mountpoint/dump.rdb
why you want to do this
Good. Now that I’ve told you what I think you probably want to do, let me tell you why you want to do it.
If you got up this morning like I did, you probably regarded a docker container as a simple method to put an application (or a micro-service) in a box and move it from Host A to Host B and deploy it. This model falls apart when it comes to databases.
The problem is that if data was stored inside the container and the container developed a fault (a bad update, or someone changes a port), you would lose that data. Of course, you could argue (and I wouldn’t stop you) that the whole point of containers is that you can snapshot them and revert to a previous working state. But consider a database that’s storing 1000 new pieces of information a second: snapshotting here would be tricky.
So instead, docker lets you mount a directory from the host inside the container, and additionally you can tell docker that this directory is a “volume”. This is a special name and means that the docker command line tool treats this directory differently. In particular, the docker command line tool will never delete a volume unless you specifically tell it to. It will also exclude volumes from your image snapshots.
This does however mean that it’s up to you to make sure your database file is safe. Maybe a cronjob, maybe some other more 21st-century solution.
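For the cronjob route, here’s a minimal backup sketch. It assumes the container and volume names from above, and it drops a dated copy of dump.rdb into the current directory:

```shell
# Ask redis to write a fresh dump.rdb to /data first.
docker exec my-redis-container redis-cli save

# Copy it out of the volume with a throwaway busybox container,
# mounting the current directory as /backup.
docker run --rm \
    -v my-data-volume:/data \
    -v "$(pwd)":/backup \
    busybox cp /data/dump.rdb "/backup/dump-$(date +%F).rdb"
```

The nice thing about the throwaway-container trick is that you never need root access to the volume’s Mountpoint on the host.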
When you want to move your container from Host A to Host B you’ll have to move your database file as well, and store it in the right place. I’ll give it a try sometime and see if it’s as easy as I imagine it could be.
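One way the move could work, sketched with the same throwaway-container trick (names as above; the copy step between hosts is up to you):

```shell
# On Host A: pack the volume's contents into a tarball on the host.
docker run --rm -v my-data-volume:/data busybox \
    tar czf - -C /data . > redis-data.tgz

# Copy redis-data.tgz to Host B (scp, rsync, whatever), then on Host B:
docker volume create my-data-volume
docker run --rm -i -v my-data-volume:/data busybox \
    tar xzf - -C /data < redis-data.tgz

# Now start the same redis container on Host B against the new volume.
docker run -d -v my-data-volume:/data --name my-redis-container -p 6379:6379 redis
```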
I hope this helps someone.