Providing a standardized development environment with Docker for all developers.

Felipe Bergamin
ProFUSION Engineering
12 min read · Mar 12, 2021

Imagine yourself being a part of a software development team (shouldn’t be hard, right? 😁). This team is growing fast, with new developers arriving each week. And all of them are excited to start working on this amazing new project.

But, first, they need to prepare their machines. Install Node, yarn, a SQL database. Some of them need to reinstall their operating system because of conflicts with old tools, installed so long ago they didn’t even remember them, until now.

So, they finally have a fresh operating system. Now, which version of Node should they install? The latest one? An LTS? But wait, some of them still have doubts about how to install these tools, and that’s not all. In this group, we have macOS, Linux and even Windows users. The installation processes are different on each platform.

How much time will these new developers need to get a working environment? Probably almost a day, or so?

And what if they have some problems with the setup? A day and a half, maybe? Furthermore, how much time will the more experienced developers spend helping them, every week?

After a while, you will probably be dreaming of a way to provide each developer with a ready-to-go, standard environment, so they can start the job as quickly as possible. An environment where they all would get the same Node and database versions, the very same configurations, and those precious extensions in the code editor.

Also, the benefits go beyond these. Using Docker, you can easily ensure that your team develops the system in an environment that matches your production servers.

Imagine a developer writing code for Node v15, using some of the newest features from ECMAScript, but your servers run Node v12 and don’t support these features. Yes, your CI tools should catch these problems, but there’s a way for you to save time and prevent developers’ frustration from the start.

You can have all this and more, and the only things you need to install are Docker and docker-compose.

Sounds good, right? So, what are we waiting for? Let’s make this dream come true. 😄

First of all, what will we build here?

We will simulate a real project: a NestJS application with a MongoDB database, and we are going to modify it to make it feel like that dream we talked about moments ago. But you can easily adapt this to other languages, or to more complex environments including Redis, Nginx, Apache Kafka, etc.

Please note that this article was written with Visual Studio Code in mind; we are going to use an extension developed by Microsoft®. Feel free to search for equivalent extensions for other editors.

Let’s get started. What do I need to install?

- Docker

- docker-compose

- Remote Containers Extension

Note: please don’t install the Snap packages. At the time this article was written, they were not supported by the extension.

You also need to add your user to the `docker` group. On Linux, you can do that with this command:

sudo usermod -aG docker [replace with your username]

You need to restart your session for this change to take effect.
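If you want to make sure everything is in place and working without sudo, a quick sanity check (hello-world is a tiny test image published by Docker) would be:

docker run hello-world
docker-compose --version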

Cloning the repository

If you wish to replicate what this article proposes, I created a simple project that connects to MongoDB and exposes some REST endpoints to read and create our data. I’ll use it from here to the end of the article, and you can clone it to work with the same project as I do.

git clone https://github.com/profusion/docker-article-backend.git

Please note that the “Cats” resource was created using the “Getting started” recipes available on the NestJS website, and the folder structure was generated by running the command “nest g resource cats”.

Some important things to note:

  • We have in this project a connection to a MongoDB database at a host whose network address is mongodb (see the sketch right after this list)
  • We created 3 routes: create, findAll and findOne. The behavior of each route is self-explanatory.
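For reference, such a connection in a NestJS project usually looks something like the sketch below; the credentials and database name here are placeholders, not necessarily the ones the repository uses. The important part is the “mongodb” host in the connection string.

// app.module.ts (a sketch; the actual repository may differ in details)
import { Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
import { CatsModule } from './cats/cats.module';

@Module({
  imports: [
    // "mongodb" is the service name of the database container;
    // Docker's network will resolve it to the right IP for us.
    MongooseModule.forRoot('mongodb://user:password@mongodb:27017/cats'),
    CatsModule,
  ],
})
export class AppModule {}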

Preparing Visual Studio Code

Once you have cloned the repository, open it in the editor. If you haven’t installed that extension yet, now is the time. 😄

You can quickly install it by pressing Ctrl + P, pasting the command below and pressing “Enter”.

ext install ms-vscode-remote.remote-containers

Writing our docker-compose file

Let’s think a bit about what we need to provide. We need a container with Node and its tooling, like npm and yarn.

Also, we need a MongoDB instance, and remember that our NestJS project will try to connect to the address “mongodb”. If you are new to Docker and docker-compose, this can sound weird. How will we make this name resolve correctly to the container running our MongoDB instance?

Well, we will not get deep into how Docker and docker-compose work, but this is actually very simple to do when using docker-compose. By default, the external world can’t reach our containers through the network. We need to tell Docker when we want to expose a specific port from our container to the outside. Further, we can create networks between our containers so they can reach each other using their names. That’s awesome when we work with Docker.

Managing networks can demand some commands on the console. That’s one reason why we are going to use docker-compose: it’s an amazing orchestration tool that allows us to simply describe in a file all the containers we want (called services here), file system volumes, networks, and more.

Create a file named “docker-compose.yml” at the project’s root directory and put this content inside:
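(The database credentials below are placeholder values; make sure they match the connection string used by the app.)

version: "3"

services:
  app:
    container_name: backend-cats-app
    image: node:lts-alpine
    networks:
      - app_network
    expose:
      - 4000
    ports:
      - 4000:4000
    depends_on:
      - mongodb
  mongodb:
    container_name: mongodb
    image: bitnami/mongodb:latest
    environment:
      - MONGODB_USERNAME=cats_user
      - MONGODB_PASSWORD=cats_password
      - MONGODB_DATABASE=cats
    volumes:
      - mongodb_data:/data/db
    networks:
      - app_network

volumes:
  mongodb_data:

networks:
  app_network:
    driver: bridge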

What are we doing in this file? Let’s talk about it line by line.

Line 5: just instructing compose to name our container “backend-cats-app”. If we don’t set the name, compose will generate one for us.

Line 6: we’re telling docker-compose to use the image “node:lts-alpine” as the base for this app service.

Lines 7 and 8: just linking this service to our app_network. This makes communication between our two containers possible using just their service names.

Lines 9 and 10: here we expose the container port; this means that the service inside the container can be reached by other containers and by our host system.

Lines 11 and 12: here we map the container port to a port on our system. This way, if you access http://localhost:4000 you will actually be hitting the container at port 4000. You may be asking now what the difference is between these two lines and the “expose” instruction.

Well, with “expose” we’re exposing our container port to be reached by our own system or other containers, using a docker network or even the container IP (yes, containers have private IP addresses, but that’s a subject for another article).

The “ports” instruction maps a system port to the container port, making the service available even to external hosts. If we remove these lines, we need to access the service using the container IP (doesn’t feel intuitive, right?). On the other hand, if we remove the “expose” instruction, our container will not have the port exposed, and we won’t be able to reach the running service. So we need both instructions.

Line 13 defines the “mongodb” service as a dependency for our container. So compose will start our database first, and then start our app. Please note: this “depends_on” instruction doesn’t check if “mongodb” is healthy and accepting connections, so depending on your environment, you may need to use a tool like “dockerize” to make the “app” container wait until the database service starts accepting connections. For example: if you need to run migrations on app startup (for relational databases), the database may not be ready yet, and your migrations will fail.
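If you went the dockerize route, for example, the startup command could be wrapped in a wait, something along these lines (a sketch, assuming dockerize is available inside the image):

command: sh -c "dockerize -wait tcp://mongodb:27017 -timeout 60s && yarn start:dev"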

Lines 15, 16 and 17: declaring our MongoDB instance, using the bitnami/mongodb image available on Docker Hub. We are using the Bitnami image here because it makes it easier to set the user and password for our database.

Lines 18 through 21: setting the environment variables that configure our database credentials. These are specific to Bitnami’s image.

On a real project, we should be using secrets to keep the credentials secure. But it’s OK to leave them hard-coded in the file for this article.

Lines 22 and 23: we’re mounting our volume at “/data/db” inside the container. This directory is where the db data is stored, so we create a volume to make the database data persist even when the container needs to be recreated.

We call these “named volumes”; there are other ways to manage volumes using compose. You may want to read more about it in the official docs.

Lines 24 and 25: link the mongodb container to our network. This makes the container reachable by other containers (also linked to this network) using only the service name.

Lines 27 and 28: declare the volume to be used by MongoDB.

Lines 30, 31 and 32: just creating our network. Setting it on the services above isn’t enough; we need to declare it here.

Why didn’t we use a Dockerfile?

Well, we could use one too, but that would give us one more file to think about. And we are going to create two more in the next steps, so let’s try to reduce how many different files we have, to keep the article easier to read.

What happens if we don’t set a custom network?

A default network will be created and all the containers will use it.

But isn’t that what we are doing with app_network?

Awesome question, yes… and no. Docker’s default bridge network doesn’t provide name resolution between containers; user-defined networks do. Declaring app_network ourselves guarantees we get name resolution and makes it explicit which services can reach each other.

Putting it all to work

Once you have installed the extension, you may notice a new icon in the bottom-left corner of your editor.

Extension indicator at bottom left

Pretty neat, huh? 😄

But this extension does lots more.

Click on the icon, and your editor should show you the options below.

Extension’s options, select the pointed option to get started

Take your time to read all the options you have. When you finish, choose the option “Add Development Container Configuration Files” (highlighted in the image above).

At this point we have some options, like using just our Dockerfile (if it exists) as the base, or using the specifications from the “docker-compose.yml” file. Choose “docker-compose”.

Choose “From ‘docker-compose.yml’”

You can now select which service you want to connect your IDE to for development.

Selecting which service we are going to use

Select “app” and go ahead. Don’t worry about our database; it will be automatically started along with our app.

We now have two new files inside a “.devcontainer” directory. Let’s talk about these files.

devcontainer.json

Here we have some configuration options, like which docker-compose service you want to attach to (if you change the service name, you need to change it here too), and which shell the editor should open in the integrated terminal; you may want to replace `null` with `/bin/sh` here.
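Stripped of its comments, the generated file should look roughly like the sketch below; the “settings” entry with a null shell is where you would put /bin/sh, and we will come back to “name” and “extensions” later:

{
  "name": "Existing Docker Compose (Extend)",
  "dockerComposeFile": ["../docker-compose.yml", "docker-compose.yml"],
  "service": "app",
  "workspaceFolder": "/workspace",
  "settings": {
    "terminal.integrated.shell.linux": null
  },
  "extensions": []
}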

docker-compose.yml

Another docker-compose, huh? Right, here the extension allows us to extend the declarations made by our root file.

Please remove the `command` instruction at the end of the file. It should look like this one:

command: /bin/sh -c "while sleep 1000; do :; done"

Why? Because this line overrides the container’s startup command just to keep the container running. If we don’t remove it, our container will start with this command instead of starting with “yarn start:dev” as defined by us. In other words: our Nest app will not start serving and we will not be able to access the resources.

Also, add these lines:

working_dir: /workspace

command: sh -c "yarn && yarn start:dev"

Note that in this same file we have a volume instruction mounting the project’s directory at “/workspace” inside the container. So we set the working directory to `/workspace`, and the command we just declared will be executed in the right directory.

Our final “.devcontainer/docker-compose.yml” file, if we remove all the comments, should look like this:
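(A sketch; the exact volume line may vary slightly depending on the extension version that generated the file.)

version: '3'
services:
  app:
    volumes:
      - .:/workspace:cached
    working_dir: /workspace
    command: sh -c "yarn && yarn start:dev"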

After these changes, click again on extension’s icon and now press “Reopen in container”.

Extension’s options again, now we should pick up “Reopen in container”

It may take some time to pull the images and build the containers. When the build is done, the editor will become available, but with some differences.

Why split this configuration between two compose files?

We use each file to declare the configuration specific to its own environment. For example, the volume and working_dir will probably be /workspace only in the development environment. The same logic applies to the command instruction: we will probably not use start:dev everywhere. So we set these options in the extension’s file, while our “docker-compose.yml” at the project’s root only declares the options that could apply to other environments as well, like port mappings, other services (like our MongoDB, for example), and the base image.

We didn’t set any entrypoint or command in the root file because we don’t need it for the article’s purpose. But in a real environment, you will probably need to do it.

Why share a volume instead of copying the files to the container?

I believe it keeps the files simple to access, no matter whether from inside the container or from our host system. For example, if we chose to copy the files into the container and, during development, wanted to use a git GUI or any tool that isn’t running inside that same container, we simply couldn’t, because all the modified files would be inside the container, and the files accessible to our host system would actually be outdated.

Also, we can’t forget that containers have an ephemeral file system. If we copied our files, started working and, for some reason, lost our container before we could commit and push to a remote repository, we would have lost all our modifications (it isn’t a risk we can take, do you agree?).

Containers running! Bye bye Kansas.

Congratulations, we now have the promised environment running and ready to go.

Note that the editor’s integrated terminal looks different, right? It’s attached to the container, and all commands will run in the container’s context. Try running “node -v” or “ps”, or even “cat /etc/os-release”, and you will see it is not your system anymore (remember, the container runs Alpine, not your host’s distribution).
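Thanks to the ports mapping, the API is reachable from your host system too. Assuming the resource generated by “nest g resource cats” is served under the default “cats” path, a request like this should get an answer:

curl http://localhost:4000/cats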

Note that the icon in the bottom left now shows the name of the workspace you’re running. Feel free to change the name prop in devcontainer.json.

Changing workspace name
Applied name to extension indicator

You need to rebuild the containers to make these changes take effect. To do that, click on the extension icon and choose “Rebuild container”.

Wait, what happened to my extensions?

They’re gone, I’m sorry. But it’s not forever, don’t worry. 🤪

Actually, they didn’t go anywhere; it’s our editor that is running in another context now, like a new machine (or a new container 😊), where we don’t have any extensions installed.

You may think this is bad. But let’s explore how we can handle this.

Open that “devcontainer.json” file again and look for these lines:

// Add the IDs of extensions you want installed when the container is created.

"extensions": []

The comment spoils it, but here we can define all the extensions we want to have installed in this workspace after the container is created, so we don’t need to install them manually every time.

With this setting, we can have all developers using the same set of extensions, and nobody will be running weird extensions anymore (well, they can still install others manually).

If your team uses some very useful extensions, or you want everyone running the same linter and code formatting tools, you can add them here. To do that, open the extensions navigator, search for the one you want to install, click on the gear at its side and then “Add to devcontainer.json”. Or just copy the extension id and paste it in the “extensions” array.
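As an illustration, a team standardizing on ESLint and Prettier could end up with something like this (these two IDs are just examples; use whatever your team relies on):

"extensions": [
  "dbaeumer.vscode-eslint",
  "esbenp.prettier-vscode"
]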

Adding an extension to container

Now the extension should be declared in the file, inside that array, and it will be installed automatically after the container is created.

Do the same for all the extensions you want to have installed.

Using other container engines

If you need an alternative to Docker, you can explore Podman.

Just add this setting to your vscode settings to make the extension use the podman binary instead of docker:

"remote.containers.dockerPath": "podman"

Remember: this goes in your vscode settings. Press Ctrl + Shift + P (Cmd + Shift + P on Mac), search for “Open Settings (JSON)”, paste it and save.
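Your settings.json would then contain an entry like this, alongside whatever you already have in there:

{
  "remote.containers.dockerPath": "podman"
}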

Conclusion

We are now using Docker and docker-compose to set up a development environment which will be exactly the same for all developers. And if you haven’t had much experience with Docker before, you might have learned something new about it here.

Setting up all the files and fixing all the issues that may appear can take time. But you’ll only have to do it once. Then everyone on your team can use this setup, no matter which operating system they run, or whether they already have a MongoDB installed with a different configuration. This should not be a problem for you anymore.
