Development with Docker: An approach 🐦
Not using Docker yet? You've read tons of articles about Docker's pros and cons but aren't fully sold, because your daily job isn't ready for an infrastructure switch (and since they haven't tried it, neither have you). Your teammates and people from other areas have started exploring it, and you understand the general idea but still wonder how to integrate it with your usual tools. Meanwhile, you're still struggling with bugs in the CI (which builds the project on Linux) that don't happen on your Mac, and the PO tells you there's a new freelance developer on the project who needs your help setting up the dev environment on Windows. And in the middle of all of this, you hear the architect's announcement:
I was in that spot some time ago, and decided to give it a shot on a side project before we make the move for real. I've been really impressed by the flexibility, portability and overall experience with Docker, and I want to share some of my findings.
Some background first!
I am a JavaScript developer. I've been working as a Front End developer for some time and, more recently, with the arrival of Node.js, I've been shifting into more of a Full Stack role. I really love both worlds: tailoring the user experience in both look and feel and performance, while doing my best to keep the code clean, DRY and human-readable.
Are you wondering which development workflow I'm going to talk about? Of course, JavaScript. But if you code in another language, don't worry, as most of these concepts apply to any other language. (In fact, I've read that you can containerize ASP.NET apps using .NET Core Docker images. Isn't that neat?) So no excuses about your OS!
Another disclaimer: this is a compilation of what I've done so far, and by no means is this the one true way to work with Docker. I know there are a lot of people already working with containers who have much more experience. If you're one of them, please call me out and let's keep this post (and myself) updated.
And no, Docker doesn't pay me to write this. I can only wish…
Consider this
Breaking your application into containers is worth considering in order to gain all of the benefits of a decentralized architecture (many teams working on many parts of the app) without all of the pain points of communication between languages, operating systems, proxies, services, etc. Having the environment abstracted away helps you focus on the real task, which should be, of course, coding your application. If you're concerned about staying in control of performance, being able to scale (or auto-scale) on demand, and getting the most out of your server solution, Docker might help you get there. In my experience as a developer, understanding how to scale an application is the key to boosting your career (and finding the meaning of life and stuff).
I'll just put some links over here in case you're missing concepts or want to know more about certain topics. Don't worry, I'll wait for you.
- Installing Docker CE on Mac / on Windows
- Installing Docker Compose
- Overview of some Docker concepts
- Docker development best practices
- A nice article on microservices
- A nice intro about dockerizing a Node.js app
- I'm using Nuxt.js as my framework of choice. If it's yours too, great!
Project structure
Let's start with some assumptions:
- You want an exact replica of your production environment in your local environment (and in your CI/CD solution).
- You want to keep all the goodness of a local development environment (in the JS world that means file watching, hot/live reloading, sourcemaps, transpiling, package management, environment configuration files, etc).
- You want to store your files in your source control solution (i.e. git) and keep out everything that doesn't need to be versioned (build files, dependencies, node_modules, etc).
- You want every new developer who joins your team to be able to set up the environment as quickly and painlessly as possible, regardless of their OS of choice.
- You want to deploy to production with the same ease as the previous point, i.e. you want your project to be as portable as possible.
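To make the third point concrete, here is a hypothetical ignore file; the exact entries depend on your stack, and these names assume the folder structure described below:

```
# project-root/.gitignore (hypothetical example)
node_modules/
.nuxt/
*.log
```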
Based on the above, and since I can't cover all use cases, I'll just show what works for me: a simple web server in nginx and a node app. The folder structure would look somewhat like this:
project-root/
| nginx/
| | Dockerfile
| | nginx.conf
| node/
| | Dockerfile__local <---- We'll get to this later on
| | src/
| | | app/ <---- Plain ol' Nuxt.js app
| | | nuxt.config.js
| | | package.json
| docker-compose.yml
Notice the Dockerfile and docker-compose.yml files. These will be your default settings for both your containers and your overall environment.
# project-root/docker-compose.yml
version: "3"
services:
node:
# This binds your container to a virtual network
# that can access all containers belonging to it
networks:
- mynetwork
build:
context: "./node"
dockerfile: Dockerfile
# This is the port Nuxt.js exposes. It is exposed
# to all containers in the network. You can customize it
expose: [ "3000" ]
volumes:
- static:/usr/static
nginx:
networks:
- mynetwork
build:
context: "./nginx"
dockerfile: Dockerfile
args:
confname: "nginx.conf"
# This binds your webserver port to the actual
# host port. As you can tell, this is for prod
# configuration, as you might not want to use
# port 80 in your local development env
ports:
- "80:80"
volumes:
- static:/usr/share/nginx/html
# Define the network. Not mandatory, but nice to have
# in order to keep track of what can access what
networks:
mynetwork:
# Define the volumes. We'll get to them later on
volumes:
static: {}
And for the Dockerfile contents:
1. In nginx
# project-root/nginx/Dockerfile
FROM nginx:alpine
# This argument is configurable from docker-compose.yml
ARG confname
COPY $confname /etc/nginx/nginx.conf
2. In node
# project-root/node/Dockerfile__local
FROM node:alpine
WORKDIR /usr/src
# Expose the HOST env var
# This is needed to ensure communication
# between Docker containers
ENV HOST 0.0.0.0
# Run server app
# Detect whether you have a yarn.lock already and if so
# just install deps listed on lock file
CMD yarn $([ -f yarn.lock ] && echo "install") && $(yarn bin)/nuxt dev
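That CMD uses a small shell trick worth unpacking: the command substitution $([ -f yarn.lock ] && echo "install") expands to install only when the lock file exists, and to nothing otherwise. A quick demo you can run on any *nix host:

```shell
# Same conditional as in the Dockerfile CMD: expand to "install"
# only when yarn.lock exists in the current directory
cd "$(mktemp -d)"
echo "yarn $([ -f yarn.lock ] && echo install)"   # no lock file yet, prints "yarn"
touch yarn.lock
echo "yarn $([ -f yarn.lock ] && echo install)"   # lock file present, prints "yarn install"
```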
Time for some Q & A
Yes, Milhouse! node:alpine Docker images listed on Docker Hub already come with Yarn preinstalled. But let's get back on track.
🤔: Why set up a docker-compose.yml ready for production when what we want is a local environment?
🐦: We're going to create another file called docker-compose__local.yml next to the original one, which will contain overrides to the default configuration to suit our local needs. That way we can keep our configs DRY.
# project-root/docker-compose__local.yml
# Stating the version here is important, as Docker will complain if
# we try to override files with different versions
version: "3"
services:
node:
build:
# Notice that in local dev we're referencing the file
# we created to store our local Dockerfile version
dockerfile: Dockerfile__local
# This below is the magic that makes Docker suited for
# local development. We'll get to it later
volumes:
- ./node/src:/usr/src
nginx:
build:
args:
# Remember the configurable argument? We could've
# created another Dockerfile__local in our nginx
# folder, but since it's only a filename change
# we can use an argument and pass it down to
# Dockerfile. Solid!
confname: "nginx__local.conf"
ports:
# Let's also override the exposed port, so we can
# work in http://localhost:9000
- "9000:80"
🤔: Why create a web server (nginx) for our local development? Aren't we good with just Nuxt.js and its Webpack Dev Server?
🐦: True, but we're replicating a prod environment. As you might know, Node.js is quite good at some things, but sucks when it comes to serving static files. Most production configurations defer static file serving to a dedicated web server, and ours is no exception. In order to fully replicate a prod environment, we need to serve local static files through nginx, so we can troubleshoot problems locally instead of in production.
🤔: How do you link the Nuxt.js output and nginx?
🐦: You need to configure your Nuxt.js application to output the static files to a different folder than the default one. The key is to point to a folder that can be accessed by both Nuxt.js and nginx. And if you're wondering: no, it's not the one we linked. You will need to add a Docker volume to docker-compose.yml. In our case, that would be the volume we defined as static. Since Nuxt.js by default generates its static output under the .nuxt folder, we need to modify nuxt.config.js to adapt to our needs:
// project-root/node/src/nuxt.config.js
module.exports = {
// This will generate all static files in /usr/static
// a.k.a. our Docker Volume
buildDir: '../static/nuxt',
build: {
// This will append this path to all static resources
// so we can route them easily on nginx
publicPath: '/static/'
  },
  // ... your other nuxt config
}
We will also need to modify our nginx.conf to support routing to static assets:
# project-root/nginx/nginx.conf
server {
  root /usr/share/nginx/html/nuxt;
  location /static/ {
    try_files $uri $uri/ @app;
  }
  location @app {
    # Proxy pass to your node docker container
  }
  # ... your other nginx config
}
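For completeness, the proxy-pass placeholder above could be filled in like this; a sketch assuming the service name node and the port 3000 exposed in our docker-compose.yml (Docker's embedded DNS resolves service names on mynetwork):

```nginx
# Hypothetical completion of the @app location in nginx.conf
location @app {
    proxy_pass http://node:3000;
    proxy_set_header Host $host;
}
```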
More Q & A!
🤔: How is Docker going to pick up my local changes to enable HMR?
🐦: With bind mounts, of course. Similar to volumes, bind mounts ensure that files can be shared between containers. What's really powerful is that they can be shared with the host as well. That's how we can keep a copy of the generated yarn.lock, see the installed node_modules and see other files relevant to the local environment. Of course, don't try to use your host commands like Yarn to install packages or run scripts, since the installed modules were built for the Linux environment, and their binaries and supported architectures may differ from your host's.
🤔: Hmm, but if I can't install modules using Yarn on my host, how am I supposed to install them?
🐦: Another goodie from Docker. You can spin up ephemeral containers as if they were binaries, in the same way you would use installed commands in your CLI, and they get executed inside the defined container environment. This means you'll be able to use all of Yarn without worrying about versions, and everything will be part of the container. And the best part: if you have bind mounts, all the generated files will end up on your host as well (so you can commit them to git):
$ docker-compose -f ./docker-compose.yml -f ./docker-compose__local.yml run --rm node yarn <all other args from yarn>
Of course, running this awfully long command sucks, so you can alias it to something more dev-friendly. I use alias for *nix environments, but you can look for the equivalent in Windows, or create a multi-platform solution by adding npm scripts to define the alias.
$ alias docker-yarn="docker-compose -f ./docker-compose.yml -f ./docker-compose__local.yml run --rm node yarn"
...then
$ docker-yarn add lodash
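And a sketch of the npm-scripts route, which works the same on Windows (npm forwards everything after -- to the script):

```json
{
  "scripts": {
    "docker-yarn": "docker-compose -f ./docker-compose.yml -f ./docker-compose__local.yml run --rm node yarn"
  }
}
```

Then: $ npm run docker-yarn -- add lodash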
Also, notice that we're running run --rm node yarn. To avoid confusion: run is the docker-compose command, node is the name of our Docker service and, of course, yarn is the command we want to execute in that container. If our service in docker-compose.yml were named e.g. nuxt-app, the command we would run would be run --rm nuxt-app yarn.
Another tip: never omit the --rm flag, otherwise Docker will eat up your disk space piling up stopped containers. The --rm flag ensures that once the command exits, the container is removed and wiped out.
🤔: One more thing: why use "__" to separate file names from their versions?
🐦: It doesn't really matter; you can use any convention you want! When running the command to bring up your application, you just need to point to the file, regardless of the name you gave it:
$ docker-compose -f ./docker-compose.yml -f ./docker-compose__local.yml -f ./my-awesome.override.yml up
Ready for a ride?
Now that you have everything in place, just run the command below.
$ docker-compose -f ./docker-compose.yml -f ./docker-compose__local.yml up
If everything went well, it should all just work: live reloading, HMR, nginx, and you will have an exact replica of your production app running locally. Expanding the concept is just a matter of adding configuration to docker-compose and ensuring communication between containers.
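As an illustration of "expanding the concept", adding, say, a Redis cache (purely hypothetical here, though redis:alpine is a real image on Docker Hub) would just mean another service entry on the same network:

```yaml
# Hypothetical addition to project-root/docker-compose.yml
services:
  redis:
    image: "redis:alpine"
    networks:
      - mynetwork
    # Reachable from the node container as redis:6379
    expose: [ "6379" ]
```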
What about production?
There are a lot of improvements that can be made using this configuration as your base. Having your code split into containers that communicate with each other through Docker makes production deployment and scaling really easy. If you're using a single server in prod, you can use docker-compose's scale feature and use nginx as a load balancer to serve more load with fewer resources. If you have a multi-server setup, you can expand this concept to Docker Swarm or Kubernetes to gain all the benefits of auto-scaling, service discovery and a ton of other features that can help your product respond to higher traffic demand.
There's much more to production configuration use cases, like pushing/pulling images to Docker Hub or any other private registry, and certificate management to test HTTPS-required features such as HTTP/2 or Service Workers. I'll talk about those in other posts, including a horror story about losing Let's Encrypt certificates.
🙏 Thanks for reading!
This is my first post on Medium. I'll try to write more stories about the projects I'm working on, including this one, which has become a journey of learning and self-growth. I'm sure you'll see more posts related to it. Don't forget to call me out in the comment section if I screwed up, if there are better approaches to local development workflows with Docker or similar tools, or anything else in particular.
Thanks to Dan, il fello and Gustavo for their technical, editorial (and grammar) review.
Also, many thanks to Paul Kehrer, Sean Schulte, and Allie Young, for Frinkiac.com, the awesome Simpsons meme and GIF generator I spend most of my day in. You guys make the world better!