Develop the DevOps way — Part 3

Abdessamad Bayzi
OCP digital factory
5 min read · Sep 30, 2019

Welcome to part 3 of this DevOps series, where I share my experience experimenting with technical practices that ease development tasks and make the development environment more similar to production, so the time otherwise spent debugging environment problems can go into coding and infrastructure improvements.

In part 1 we saw the struggle of working with a non-automated development environment, the benefits of having an automated one, and how to achieve this easily with Vagrant.

In part 2 we looked in more detail at how this development environment is built, what Ansible is, and how to use it.

In part 3 we’ll cover other best practices: using a Makefile, docker-compose, and multi-stage builds in Dockerfiles.

As a full stack developer, you may be using the following development Kit in your daily activities:

  • ReactJS: for the frontend
  • Spring-boot: for the backend
  • MongoDB and MySQL
  • Hopefully Docker, so you don’t have to handle the MySQL and MongoDB installation
  • Git as a version control system for your code

No need to remind you why you should be using Docker: automation and the ability to rapidly deploy server environments in containers are the key benefits.

However, in most in-production cases there is a separate container platform per environment (dev, staging, prod …). The containers we build, test, and run locally are part of a larger context. One issue that often pops up is that when you share Docker code with a team, or with the world, you also need to explain how each of your containers (frontend, backend …) should be run.

Using docker-compose certainly makes things easier, but it’s not enough :(

But imagine you only have to provide one line to test, build and run the application:

$ make run

You can easily ask a colleague to test your product without making them read a long README. It either works or it doesn’t.

I’m assuming you’re familiar with Docker and docker-compose. Let’s see how to get the work done.

Makefile

As we saw in Part 1, your sandbox contains all the necessary tools (Linux, Docker, Git). If you are not using the sandbox, make sure to install Docker, Git, and a shell of your choice. Docker install instructions are here: Ubuntu, Mac or Windows.

Makefile is not something new; it’s a 40-year-old classic that has been reborn with automation projects. Here’s my version of the Makefile:
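The original gist is no longer embedded, so here is a minimal sketch of a Makefile along the same lines. The image-tag logic, the `svc` variable, and the target bodies are illustrative assumptions; adapt them to your own docker-compose setup:

```makefile
# Tag images with the current commit sha1 (short form) from git
TAG := $(shell git rev-parse --short HEAD)

.PHONY: help build run prune clean-db

help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "}; {printf "%-10s %s\n", $$1, $$2}'

build: ## Build all images, or a single one with `make build svc=frontend`
	TAG=$(TAG) docker-compose build $(svc)

run: build ## Build then run all services, or one with `make run svc=backend`
	TAG=$(TAG) docker-compose up -d $(svc)

prune: ## Delete ALL Docker images on this machine -- use with care
	docker rmi -f $$(docker images -q)

clean-db: ## Stop containers and wipe database volumes for a fresh start
	docker-compose down -v
```

With this layout, `make run svc=backend` builds and starts only the `backend` service defined in the docker-compose file, while a bare `make run` brings up everything.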

And yes, I’ve used docker-compose everywhere (so proud of it 😁). It makes the whole automation story easier. Before sharing the docker-compose file, let’s briefly see what this Makefile does.

While the help target in the Makefile says it all, I prefer to provide more clarifications:

$ make run

Runs the whole application (frontend, backend, and a MongoDB instance). If you prefer to run only the frontend or the backend service, you can run:

$ make run svc=backend

The service names are those defined in the docker-compose file.

Behind the scenes, before the run, the frontend and backend images are built from their Dockerfiles and tagged with the commit sha1 from your git log. The two Dockerfiles I’m using benefit from Docker’s layer-caching principles, so you can focus on what matters most, development, instead of waiting for an image rebuild after every minimal code change. Be patient, we’ll get to the Dockerfiles soon.

$ make build
$ make build svc=frontend

You can use the build target to build the application images (or one of them) without running anything.

$ make prune

This target deletes all Docker images on your system, so be careful with it.

$ make clean-db

When using a database, things easily get messy with the dummy data you inject to test the application, so cleaning data is something you’ll do very frequently.

Docker-compose

Here’s the docker-compose file I used in my implementation:
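The original gist is missing here, so the following is a sketch of what such a docker-compose file could look like. Image names, ports, and environment variables (such as `SPRING_DATA_MONGODB_URI`) are illustrative assumptions, not the author’s exact file:

```yaml
version: "3.7"

services:
  frontend:
    build: ./frontend                 # directory containing the frontend Dockerfile
    image: myapp/frontend:${TAG:-latest}
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:8080

  backend:
    build: ./backend                  # directory containing the backend Dockerfile
    image: myapp/backend:${TAG:-latest}
    ports:
      - "8080:8080"
    environment:
      - SPRING_DATA_MONGODB_URI=mongodb://mongodb:27017/app
    depends_on:
      - mongodb

  mongodb:
    image: mongo:4.0
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db           # named volume, removable with `down -v`

volumes:
  mongo-data:
```

The `${TAG:-latest}` substitution picks up the `TAG` variable exported by the Makefile, so images are tagged with the commit sha1.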

I guess everything is clear here: just a simple docker-compose file that defines three services: frontend, backend, and mongodb. Each service definition consists of the path to its Dockerfile, the ports to map and expose, and some environment variables.

Dockerfiles

Finally, we get to the Dockerfiles. To use Docker for development, we can’t accept any impact on the developer, either in terms of complexity or in the time it takes to build and deliver value fast.

If you’re using some kind of CD pipeline in your project, you should know that the base image used for the build is different from the one used at runtime. Why? This way you keep your Docker images as small as possible and save time in your pipeline execution. You can do the same when working with Docker on your local machine: it’s called multi-stage builds.

However, if not set up correctly, it will be slow as hell or, even worse, unnecessary steps will be performed for every small modification. Imagine having to reinstall npm dependencies after each change, even ones that don’t touch the package.json file. What a nightmare 😠.

To create images, Docker uses layers. Each command in a Dockerfile creates a new layer, which contains the filesystem changes between the state before and after the command’s execution.

So if we make sure that the layer created by RUN npm install doesn’t change unless package.json changes, we’ll benefit from the power of the Docker cache (the same goes for pom.xml in a Spring Boot project, for example). The following frontend and backend Dockerfiles are implementations of this principle:
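The original Dockerfiles were embedded as gists; sketches along the same lines follow. Base image versions, build paths, and artifact names are illustrative assumptions:

```dockerfile
# --- Frontend: build stage ---
FROM node:12-alpine AS build
WORKDIR /app
# Copy only the dependency manifests first, so this layer (and npm install)
# stays cached until package.json actually changes
COPY package.json package-lock.json ./
RUN npm install
# Now copy the source and build; only these layers rebuild on code changes
COPY . .
RUN npm run build

# --- Frontend: runtime stage ---
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```

```dockerfile
# --- Backend: build stage ---
FROM maven:3.6-jdk-8 AS build
WORKDIR /app
# Copy pom.xml alone and resolve dependencies, so they are cached
COPY pom.xml .
RUN mvn dependency:go-offline
COPY src ./src
RUN mvn package -DskipTests

# --- Backend: runtime stage ---
FROM openjdk:8-jre-alpine
COPY --from=build /app/target/*.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

In both cases the runtime stage contains only the built artifact on a slim base image, keeping the final images small.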

In both Dockerfiles, we copy the dependency files (package.json and pom.xml), install the dependencies, then copy the source code and run the build. This ensures dependency installation is performed only when a new dependency is added to the dependency files; otherwise we reuse the cached layer from the last build.

The generated artifacts are then used in the next stage, with an ENTRYPOINT command to start the application, for example.

I hope you enjoyed reading this post. I’ll be glad to answer your questions if you have any. Loved it? Don’t agree? Talk to me in the comments below. And Clap clap clap 👏 😁.

