Published in The Phi

My experience of working in a containerised development environment

It has been two months since I moved to the software-defined networking domain with HPE Composable Fabric, and most of my API development environment is containerised.

By this, I mean that the builds of my application are containerised. (A software build is the process of creating an executable, which is the tangible result of your development activity.)

How exactly is it done?

Well, you summon the genie!

Quite simple:

  • At the outset, a container is created with all the runtime dependencies. For this, you write a Dockerfile that defines the blueprint of the container: which base OS image to use, fetching and installing prerequisites, creating users and folders, setting privileges, and so on.
  • Since containers don’t have persistent storage of their own, we check out the development branch of our feature in our repo and mount the code as a volume into the container. (You can do this from your IDE too; it is sometimes called path mapping.)
  • Once the volume is mounted, the build happens inside the container, an executable is generated within it, and the service starts running in the container.
  • With port mapping, we can communicate with the service inside the container from the host machine/laptop.
  • When you want to switch to a different branch, mount that branch’s code volume and build again. If you prefer, you can create a new container with all the above steps.
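The steps above can be sketched with a minimal Dockerfile and a couple of docker CLI commands. The image name `myapp-dev`, the paths, the port, and the `build_and_run.sh` script are all hypothetical placeholders, not the actual project setup:

```dockerfile
# Blueprint of the build container: base image, prerequisites, user, workdir
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y build-essential git
RUN useradd -m dev                 # create a non-root user
USER dev
WORKDIR /home/dev/src              # folder where the code volume will be mounted
```

```shell
# Build the image once from the Dockerfile above
docker build -t myapp-dev .

# Check out the feature branch, then run a container:
#   -v mounts the host checkout into the container (no persistent container storage needed)
#   -p maps container port 8080 to host port 8080 so the host can reach the service
git checkout feature/my-feature
docker run -d --name myapp-feature \
  -v "$(pwd)":/home/dev/src \
  -p 8080:8080 \
  myapp-dev ./build_and_run.sh
```

Because the code lives on the host and only the runtime lives in the image, switching branches is just a `git checkout` followed by a rebuild inside the container.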

These are the advantages and conveniences I experience with such a development environment —

  1. I can work on different features of the software carefree, because every branch of the repo that represents a feature gets its own container with the code and the runtime dependencies.
  2. I can deploy, test, and switch between multiple features at the same time, which would have been difficult if my builds were deployed to a single virtual machine. That sense of simultaneity is really nice.
  3. When a bug fix gets assigned to me, I can replicate the exact environment the test engineer had set up when they identified the bug.
  4. I can test scenarios that are potentially catastrophic, because it takes only a few seconds to kill a container and spawn a new one.
  5. It is also easy to ship the container images to peer teams who use or integrate our product.
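As an illustration of point 2, two feature branches can run side by side from the same image by mounting different checkouts and mapping different host ports. The image name, paths, and ports below are hypothetical:

```shell
# Feature A: mount its checkout, expose the service on host port 8081
docker run -d --name myapp-feature-a \
  -v "$HOME/src/feature-a":/home/dev/src -p 8081:8080 myapp-dev

# Feature B: a second container from the same image, on host port 8082
docker run -d --name myapp-feature-b \
  -v "$HOME/src/feature-b":/home/dev/src -p 8082:8080 myapp-dev

# Both services are now reachable from the host at the same time,
# e.g. http://localhost:8081 and http://localhost:8082
```

The containers are isolated from each other, so neither feature's dependencies or state can interfere with the other's.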

Not everything comes as a complete package of happiness. This setup has its troubles too, which the developer has to take care of —

  • Containers are built from layers of libraries and dependencies, and these layers are cached for future use. Creating many containers can make you run out of storage, as the intermediate layers and dangling images remain on the system if we don’t clean them up regularly.
  • You must ensure that the log files and debug files are written to the mounted host volume; otherwise they will be lost once the container is killed. Alternatively, be sure to retrieve them while the container is still running.
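To keep the dangling layers under control, the docker CLI has pruning commands that can be run periodically (the `-f` flag skips the confirmation prompt):

```shell
# Remove dangling images (untagged intermediate layers)
docker image prune -f

# Remove stopped containers, unused networks, dangling images and build cache
docker system prune -f

# Inspect how much disk space images, containers and volumes consume
docker system df
```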

I have found working with such a setup really convenient. Overall, I believe a containerised development environment is very useful in a software development ecosystem that is trying to be agile, for faster and better delivery of software.

Thanks & regards,
Samarth Deyagond

Samarth Deyagond
Core Kubernetes Developer @ Gardener | Cloud Native Advocate