Things You Don’t Need to Incorporate In a Docker Container
Save yourself development time and headaches with microservice architecture
Ah, Docker. Leave the excuse of "it works on my machine" at the door, beside your Apple II and 3½-inch floppies. Never has it been easier to spin up microservices that communicate with each other effectively.
But setting up Docker can (sometimes) be a pain, especially if you're doing more work than necessary. Here are three things your Docker container DOESN'T need.
1. User Accounts
Security experts are probably gonna be all over this article right now, but I will soon appease you folks, so hang tight.
I know, it can be a shock to enter the Docker shell and see that you are, in fact, root. It breaks one of the cardinal rules of system administration: never run as root.
But remember that a Docker container is like a mini operating system. If a bad actor gets into the container and ruins things, you can always destroy the container and restart from a clean image.
Note that this is not the same as running the container AS HOST ROOT, which you should not do unless you have a specific reason.
Now, to appease the security people: yes, it is a bit better to have user-specific actions within a Docker container (Docker users and host users are eloquently described in this Red Hat article).
But try to keep it simple. If you're just starting to develop a Docker container, keep everything as root, then add the user permissions when you're ready. Don't add `sudo`. It's a nightmare to get right. Just stick to the `USER` instruction and be on your way.
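The pattern above can be sketched in a Dockerfile like this (image, package, and user names are illustrative, not prescriptive):

```dockerfile
FROM ubuntu:18.04

# Build steps run as root by default — no sudo needed
RUN apt-get update && apt-get install -y python3

# Once everything works as root, add an unprivileged user
# as one of the final steps and switch to it
RUN useradd --create-home appuser
USER appuser

CMD ["python3", "-m", "http.server", "8000"]
```

Everything before the `USER` line executes as root; everything after it, including the container's runtime process, executes as `appuser`.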
2. The Latest Version of Software
In fact, it's better not to upgrade anything inside the Docker container at all. Keep versions pinned and consistent with the container's operating system. For example, if you're using an Ubuntu 18.04 image, plan around Python 3.6, the version it ships with.
The reason is that a Docker container should be thought of as a snapshot. If you automatically default to the latest and greatest (e.g. run a dist-upgrade as part of the build process), software may change underneath you without your noticing. For example, a C++ library you were relying on might be deprecated or removed in the latest version!
The caveat to this is a security notice for a version of software you're using. If a security bulletin is posted for your OpenSSL version, make sure your Docker image patches it! This might involve upgrading other packages as well, but be sure to freeze the new versions too.
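Pinning can be done directly in the Dockerfile with apt's `package=version` syntax. A minimal sketch (the exact version strings below are illustrative; check `apt-cache madison <package>` inside your image for the versions actually available):

```dockerfile
FROM ubuntu:18.04

# Pin exact versions so every rebuild produces the same snapshot.
# Bump a pin deliberately when a security bulletin requires it,
# then freeze the new version here.
RUN apt-get update && apt-get install -y \
    python3=3.6.7-1~18.04 \
    openssl=1.1.1-1ubuntu2.1~18.04 \
 && rm -rf /var/lib/apt/lists/*
```

If a pinned version disappears from the package mirrors, the build fails loudly instead of silently pulling in something newer, which is exactly the behavior you want from a snapshot.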
3. Shared Library Directories
Let's say you're dabbling with an npm project, and you're loading up testing in Docker. "Wait a minute," you say, "I'm installing packages twice: once for local development and once for the Docker container. What if I shared the `node_modules` directory between the two?"
Well, have at it, but you’ll run into trouble.
This is because any shared volume, once mounted, gets assigned root permissions so that Docker is able to read it. That means if you want to install a new package from your host, you can't, because `node_modules` is no longer writable by you.
Yes, you could do a massive `chmod` on `node_modules`, but if the Docker container then modifies a package inside `node_modules`, you'll have to fix the permissions all over again.
Instead, separate out the static requirements (things that won't change very much, e.g. Python, Nginx) from the dynamic requirements (things that will change throughout the life of the program, e.g. Python modules, Node modules, your own source code).
Static requirements can be `COPY`'d in the Dockerfile, since you're planning on building and forgetting. Dynamic requirements can be mounted, but only as volumes that follow a host -> container flow (think source code).
Otherwise we'll run into the same permission problem, because Docker will override the permissions of the mounted volume. To keep things safe, mount the volume as read-only.
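Sticking with the npm example, the split might look like this (image name and paths are illustrative). Dependencies are baked into the image at build time, so `node_modules` lives only inside the container and never touches the host:

```dockerfile
FROM node:10
WORKDIR /app

# Static-ish requirements: baked into the image, rebuilt only
# when package.json or the lockfile changes
COPY package.json package-lock.json ./
RUN npm install

CMD ["npm", "test"]
```

At run time, only the source code flows host -> container, mounted read-only with the `:ro` flag: `docker run -v "$PWD/src:/app/src:ro" my-app`. The container can read your code but can't rewrite the permissions on anything you own.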
Keep It Simple; Add as Needed
Writing a Docker container is a lot like sending the Curiosity rover to play chess on Mars. It takes some finagling and some prayer to get right. The key is to keep it simple: start with a Dockerfile, then move on to docker-compose.
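Once a single Dockerfile builds cleanly, graduating to docker-compose can be as small as this (service name, port, and paths are illustrative):

```yaml
# docker-compose.yml — a minimal sketch wrapping the Dockerfile above
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ./src:/app/src:ro   # host -> container only, read-only
```

From there, `docker-compose up` builds and runs the service, and additional microservices become additional entries under `services`.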
If your team is needing to update your microservices architecture, I can help with that. I have over 10 years development experience and can get our software tech stack to work in harmony. Head on over to https://damngood.tech to schedule a free consultation.
Each article takes time, and I work to ensure teams find a lot of value in it. If you find the articles helpful, you can support my work on Ko-fi: https://ko-fi.com/damngood/tiers.