The DevOps Idea — Part 1: The tale of dependency conflicts
We have heard a lot of buzzwords flying around for some time now about something called DevOps. Is it just SysAdmins who also want to do software engineering? Or software engineers who want to do SysAdmin work? Or is it an engineering role that revolves around the idea of automating all the things? DevOps can be perceived in many ways. In fact, if you search LinkedIn for DevOps Engineer job listings, you will find an amazing variety of job descriptions, most of which do not match one another. This clearly shows that there are various cultural and engineering understandings of the awesome DevOps idea.
So, what exactly IS DevOps?
Most resources on the internet will tell you that DevOps is essentially the combination of cultural practices and engineering tools that pave the way for faster evolution of products and easier maintenance. To put it in the simplest of terms, a DevOps practice helps you release code with a reduced fear of taking production down (for the most part) and ship new features quickly without a lot of “ops overhead”. A DevOps environment is one where the whole team works together until the feature or the product gets to production. Not just that: following the basic principles of DevOps, the organisation also makes sure its culture is blameless. In short, what I am trying to say is:
It’s not a problem for A team, but for THE team!
In this series of posts, I will cover a few tools that advance the adoption of DevOps culture and practice. We are going to start right from the point where the developer builds the code locally and go all the way to troubleshooting in production. To be honest, there are just too many tools out there, and mentioning only a select few might seem plain ignorant. If there are other tools you think are worth exploring in a particular scenario, I would appreciate you sharing them in the comments or with me on my Twitter.
Packaging — The tale of dependency conflicts
The conversion of a business idea into a releasable product begins when the developer starts writing code.
The journey of the application begins here. This is the point where the hours of design and implementation discussions finally take effect. We know this phase is important. We know that if we don’t expose this code to most of the harsh realities of the production world right from the beginning, the application might just give up on its life(cycle). The code should also be protected. It should get all the dependencies it needs, in the right condition and in the right version. It should also get the safety of the exact version of the language it is written in. And it should not receive any unplanned stimuli from its surroundings in the production environment, because when the code fails, the application crashes, and all hell breaks loose!
On the less dramatic side of things, what do we do? Do we always package fat JARs or virtualenvs and hope that the same JDK/JRE or Python version is running across the great realm of servers? Or should we trust the mighty VMs to always be configured right when the so-called “requirements exchange” between Dev and Ops happens? What if that doesn’t go as planned?

So, what we want is for the production and dev environments to be as close to each other as possible. After some research and working with a few tools myself, I found two awesome methods that allow us to bridge the gap: containerise the application (Docker), or define all environments using the same configuration file (Packer).
Docker (a.k.a Moby)
Docker is software that lets us build Linux containers through the use of a configuration file called a Dockerfile. To keep it short, a Dockerfile contains a list of commands from which Docker builds a container image. Any computer running the Docker Engine can spin up containers based on this image. As each container runs in its own isolated environment, you can safely eliminate the fear of your application being affected by mismatched versions of dependencies on the server. Docker Engine runs on Windows, macOS and Linux boxes.
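As a minimal sketch of what a Dockerfile looks like (the Python version, file names and start command here are assumptions for illustration, not from any particular project), consider a small Python service:

```dockerfile
# Start from a pinned base image so every build uses the same Python version
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to start it
COPY . .
CMD ["python", "app.py"]
```

Running `docker build -t myapp:1.0 .` in the same directory produces an image that behaves identically on any machine with Docker Engine installed.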
Docker images can be shipped independently and even published to a central registry hosted by Docker, Inc. or to your own registry. There are various other Docker registry services available; the one hosted by AWS is called Amazon ECR.
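For example (the image and registry names here are hypothetical), publishing an image to a registry is just a matter of tagging it with the registry’s address and pushing:

```
$ docker tag myapp:1.0 registry.example.com/myapp:1.0
$ docker push registry.example.com/myapp:1.0
```

Anyone with access to that registry can then `docker pull` the exact same image onto their server or laptop.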
Okay wait! Linux containers? What if my application runs on Windows?
Docker also supports Windows containers. This means that applications running on Windows can be packed into Docker images and deployed as Docker containers on Windows-powered servers. As of this writing, Windows Docker containers cannot run on Unix-based systems like Linux and macOS.
When adopting a microservices infrastructure, a lot of thought goes into running everything inside containers. That includes not only applications, but also databases (MySQL, MongoDB), caching engines (Redis) and even message brokers (Kafka). While doing this in production can sound like a challenge, setting up dev environments using just Docker containers is not a bad idea at all. Developers can set up their environment with just a few command-line instructions and tear it down just as easily. For example, this is all you have to do to run MongoDB (v3.4) on your local machine:
$ docker run --net=host mongo:3.4
If software stack versions are well communicated among the developers, they can simply pull the Docker images of the required external dependencies and simulate the whole production environment right on their desktops in a matter of minutes. Docker also provides a multi-container orchestration tool called docker-compose, which is used to define a network of containers and start them up or shut them down with a single command. I will discuss Docker in more detail in later posts.
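As an illustration of docker-compose (the services and versions chosen here are assumptions, picked to match the examples above), a docker-compose.yml for a small dev environment might look like this:

```yaml
version: "3"
services:
  # MongoDB, pinned to the same version used in production
  mongo:
    image: mongo:3.4
    ports:
      - "27017:27017"
  # Redis as the caching engine
  redis:
    image: redis:3.2
    ports:
      - "6379:6379"
```

With this file in place, `docker-compose up -d` starts both containers in the background, and `docker-compose down` tears the whole environment down again.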
Packer
Sometimes it’s okay to dedicate an entire virtual machine to a single application, or containerising an application might not sound like the best idea. Maybe containerising is fine in the local environment, but in production the app will have to run on an EC2 instance. How can consistency be maintained here? Packer provides tooling to build any kind of machine image from a single, unified configuration file. Packer is a lightweight engine. Here’s what happens in Packer at the highest level:
A user defines a Packer template consisting of instructions on how one or more Packer builds should be performed using various builders. The build process can use a variety of provisioners to configure the contents of the image. At the end of the build process, one or more artifacts are produced, each a deployable end product.
There are plenty of builders available. You can create an AWS EC2 image (AMI), a Google Compute Engine image, an Azure virtual machine image, and even a Docker image, all from just one configuration file. The configuration file can harness the power of provisioners such as Ansible, Chef or Puppet. Packer can also run post-processors that let you publish the image to the required repository right after it is built. For more information, you can refer to the Packer Getting Started guide.
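As a rough sketch of a Packer template (the region, source AMI ID and packages installed here are placeholders for illustration, not real values), a JSON template that bakes an AWS AMI could look like this:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "myapp-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y openjdk-8-jre"
      ]
    }
  ]
}
```

Running `packer build template.json` launches a temporary EC2 instance, runs the shell provisioner on it, and snapshots the result as a reusable AMI. Swapping or adding a builder block is all it takes to produce the same image for another platform.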
Conclusion
In this post we have seen two tools that help us unify the dev and production environments. Docker builds container images that run anywhere Docker Engine does, while Packer builds machine images for whatever target platform you define. If you are asking which one is better, I would say what most people do:
It depends!
To settle on one, you should analyse the situation yourself. Can you run Docker everywhere? Or do you really, really, really need the flexibility of building images for different target platforms? Some things are always “good to have”, but one should not ignore other practical and more important constraints, like, for example, deadlines 💀. There are many constraints that restrict the selection of tools, and it’s always a good idea to make such adoptions a team decision. After all, DevOps is all about unity.
Finally, through the use of these tools, we can avoid the world’s most popular excuse for code execution failures:
Works on my machine. Why not on yours?
Questions and interactions are welcome. Here’s my Twitter and LinkedIn. Please follow and share if you liked the post.
