The First Thing You Should Know When Learning About Docker Containers
If I could go back to when I was learning Docker, I wish someone had started by explaining the conceptual philosophy behind it before anything else.
Most of the time, when someone explains Docker to you, you leave the conversation or article thinking: “Oh, I get it, containers are like VMs but with less bloat and quicker to start, because they share your Linux kernel instead of booting their own OS.”
That’s partly true: you can use a Docker container in almost the same way you would a VM, and some good, useful public images, such as Kali, reflect this idea.
However, that is not the design philosophy behind Docker containers, and it is certainly not the best way to run them in production, or on any server for that matter.
Docker containers are, in essence, isolated services, not VM replacements.
That is a philosophical definition more than a technical one, and Docker has been designed around it; this is why docker-compose refers to containers as services. So when designing your containers and servers around Docker, keep the following design principles in mind:
- The Docker image should contain everything it needs to run as quickly as possible, and nothing else. Use Alpine-based images whenever possible.
- Configuration of the container/service should, if feasible, be passed via environment variables at execution time. This avoids having to map config files and edit them on the host server or on a volume for persistence. If more complex, permanent configuration is needed, it may be a good idea to bake it into your image and store that image in a secure registry.
- Any code that runs in your container should be included in the image; if the code needs revision, update the image and simply pull it on the server from your registry to fetch the latest code. This also makes it easy to keep the underlying base image up to date, tested, and current with the latest security updates.
- In most cases, the only parts of the container that should be mapped to a host path or volume are changeable data that you need to keep, such as databases. Anything that is okay to lose on restart is usually fine to leave inside the container (make sure to use the overlay2 storage driver for best performance!).
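As a sketch of the first and third principles, a minimal image for a hypothetical Node.js service might look like this (the app files and commands are illustrative, not a prescription):

```dockerfile
# Small Alpine-based base image: only what the service needs, nothing else
FROM node:20-alpine

WORKDIR /app

# Bake the application code and its dependencies into the image,
# so updating the code means building and pushing a new image
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# No config files baked in here; configuration arrives
# via environment variables at run time
CMD ["node", "server.js"]
```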
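A docker-compose file following the other two principles, configuration via environment variables and a volume only for the data that must survive restarts, could look like this (the image names, variables, and paths are placeholders):

```yaml
services:
  api:
    # Pre-built image pulled from your registry; no Dockerfile on the server
    image: registry.example.com/myorg/api:1.4.2
    environment:
      # Configuration passed at execution time instead of mapped config files
      DATABASE_URL: postgres://db:5432/app
      LOG_LEVEL: info
    depends_on:
      - db

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      # The only mapping: changeable data we need to keep
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```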
Why do it this way on a production server? Because all the work you do on your server should happen behind the curtain and be committed to a central repository and, if applicable, a container registry. Any configuration, code, or service you apply to your servers should be done in a way that discourages fiddling and hotfixing on the spot, to avoid snowflake configurations and untracked changes that are hard to replicate later.
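In practice that workflow can be as simple as building and pushing from your workstation or CI, then pulling on the server; the registry, image name, and tag below are placeholders:

```shell
# Behind the curtain: build and push to the central registry
docker build -t registry.example.com/myorg/api:1.4.3 .
docker push registry.example.com/myorg/api:1.4.3

# On the server: no editing in place, just pull the new image and restart
docker compose pull api
docker compose up -d api
```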
Also, I believe the configuration you have to do once you are on the server should be kept to a bare minimum: you shouldn’t need Dockerfiles there, or unnecessary mounts for data that never changes or for configuration that changes very rarely.
Another good reason to implement it this way is that if, in the future, you decide to do away with servers and move your containers to, say, Kubernetes, most of your work will already be done and the migration won’t be a massive pain.
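For example, a service that already takes its configuration from environment variables maps almost one-to-one onto a Kubernetes Deployment (the names, image, and values here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          # The same image you were already pulling from your registry
          image: registry.example.com/myorg/api:1.4.2
          env:
            # The same environment-variable configuration carries over
            - name: DATABASE_URL
              value: postgres://db:5432/app
```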
Of course, these principles are not dogmatic; different companies and individuals have different needs, and sometimes a different approach is more suitable. Logic and good reasoning should dictate in such circumstances.
To implement this philosophy all the way through, it is important that Developers and Operations work together and understand these concepts; otherwise, implementation becomes very problematic.
This is why working in an organisation with a strong DevOps culture is important: it lets you create resilient, replicable, and disposable environments that can be recreated easily and quickly.