Use Docker to streamline your development experience

Michael Sholty
Jul 17 · 6 min read

Objective

In this article, I will outline how to set up a local development environment on your machine using docker-compose to orchestrate the separate parts of a tech stack consisting of:

  • Front end (client)
  • GraphQL API (API)
  • MongoDB (database)

The objective is to be able to containerize each part of our application and run the entire thing with a simple command: docker-compose up.

I want to point out that the documentation for docker and docker-compose is very detailed. I highly encourage you to read the Getting Started guide on the official Docker homepage. Once you feel comfortable with the basics, come back and follow along as I help you set up a practical bare-bones full-stack application.


Why Use Docker?

You may be asking, “What benefit does Docker add to warrant introducing it into this repo? I can create and maintain a web application without it!”

That’s a good question! There are many simple web applications that consist of just a web app or just a server hosting static content. However, once you have a lot of moving parts and teammates working with you, a tool like Docker can make working on your project a consistent flow no matter whose computer you’re working on. For our example specifically, it eliminates the need to do things like install MongoDB locally.


Getting Started

Clone this repository locally to get started.

Note: The master branch is ideal for following along with this article. The docker branch will have the necessary Docker-related files added so Docker works as expected.

This project has the front end and API written for you, and you can start them up yourself. The instructions on how to start both client and api are in the README.md.

There are a few issues:

  • You need to install MongoDB globally before running the application. MongoDB versions can differ from project to project, which complicates things quickly if you maintain multiple projects on one machine.
  • You need to run two separate commands to start the application, and a third if you count remembering to start your local MongoDB.
  • You’ll need other dependencies to make sure the application runs correctly. Common ones are node, yarn, and npm. Yes, it’s totally reasonable to install these on your machine without Docker, but these are just a few examples of things that can vary slightly from machine to machine, making your development environment subtly different from another one and causing unexpected issues. These types of dependencies are not enforced from within the package.json.

Dockerfile for API

First, let’s containerize our API. We will need to create a file named Dockerfile in the api directory to start. Remember, a Dockerfile doesn’t end with an extension like .txt.

This is a small file, so let’s talk about all the parts here.
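Based on the instructions described below, a minimal sketch of this Dockerfile might look like the following (the exact contents in the repo’s docker branch may differ slightly):

```dockerfile
# Base the image on the official Node.js image, pinned to a specific version
FROM node:12.6.0

# Work out of a dedicated directory inside the image's filesystem
WORKDIR /usr/src/app

# Copy everything next to this Dockerfile into the working directory
COPY . .

# Document that the API listens on port 4000
EXPOSE 4000

# Install dependencies at image build time
RUN yarn

# Start the API when a container runs from this image
CMD ["yarn", "watch"]
```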

First of all, we need to define a FROM so Docker knows what image we’re basing this Dockerfile image on. It seems like magic, but simply defining our image FROM node:12.6.0 gives our image a ton of functionality out of the box!

Remember, a Docker container is an isolated environment, not very different from a run-of-the-mill Linux machine, so it has its own filesystem, separate from our computer’s. We use the WORKDIR instruction to set the directory that subsequent instructions operate in. Just as you wouldn’t normally install a project in your computer’s root directory, let’s tell Docker to work out of /usr/src/app.

COPY . . will copy all files and folders in the same directory as our Dockerfile to the working directory we’re in within the Docker image.

At first glance, EXPOSE 4000 may appear to expose the port so you can access it from outside the container. However, according to the Docker documentation, it simply serves as documentation to you and other developers as to which port should be published at runtime. More on that later!

RUN yarn will simply install all the dependencies into the image at build time, so when you run the container, all the dependencies are there as expected.

CMD ["yarn", "watch"] sets the command that runs when a container starts from this image.


Dockerfile for client

Our Dockerfile for the front-end project won’t be drastically different.

The main difference here is that we declare that port 1234 should be exposed, since this is the default port Parcel uses to serve the web app. Additionally, the application is started with the command yarn start. Note that we use a node:12.6.0 image here as well. You might have thought, “Maybe there is a Docker image optimized for front-end development? Maybe a React or TypeScript image?”

While I’m not doubting that there is, understand that it’s simply not necessary. We just need to have a node environment available, and all the other tools we primarily interface with on a front-end application are installed via the package.json.
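Putting those differences together, a sketch of the client Dockerfile (assuming the same layout as the api) might be:

```dockerfile
# Same base image as the API: all we need is a Node environment
FROM node:12.6.0

WORKDIR /usr/src/app

COPY . .

# Parcel serves the web app on port 1234 by default
EXPOSE 1234

RUN yarn

# The client starts with yarn start instead of yarn watch
CMD ["yarn", "start"]
```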


Building Our Images

Once we have our Dockerfile for each application defined, we need to build our images so they’re available locally. The end result of this section will be that we’ll see the appropriate images displayed when we use the command docker image ls.

Building an image is very straightforward. From the project root directory, run the command docker build -t client ./client to build the front end, and docker build -t api ./api to build the GraphQL API.

Once those commands finish, running docker image ls should list the newly built images.

You can see that there is an image for api and client, but also one for node. That’s because our two images are based on the node image, and Docker needed to pull it before it could build the other two.


Running Our Application

Once these images are built successfully, you could technically run each of them with the docker run command, but we want to take the next step and orchestrate these containers together with docker-compose. Also, we are still missing our database!
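For illustration, running the two images individually might look like this (the port mappings are assumed from the EXPOSE values discussed above):

```shell
# Publish each container's port to the same port on the host
docker run --rm -p 4000:4000 api
docker run --rm -p 1234:1234 client
```

This works, but you’d need a terminal per container and you’d still have to wire up the database yourself, which is exactly what docker-compose handles for us.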

Let’s create a docker-compose.yml in our project’s root directory:
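A sketch of what this file might contain, assuming the api and client images built above and the ports discussed earlier (the mongo version tag here is an assumption; pin whichever version your project needs):

```yaml
version: "3"
services:
  api:
    # Use the locally built api image
    image: api
    ports:
      - "4000:4000"
    # Assumed: start the database before the API
    depends_on:
      - mongo
  client:
    # Use the locally built client image
    image: client
    ports:
      - "1234:1234"
  mongo:
    # Pulled from Docker Hub; no local Dockerfile needed
    image: mongo:4.0
    ports:
      - "27017:27017"
```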

In this file, you will see some familiar configuration. Here we’ve defined the three services that need to work together to make our application run. The api and client shouldn’t come as much of a surprise — we have already built those images locally, and the configuration above simply tells docker-compose to use them.

The mongo configuration is interesting. We don’t have a mongo folder or project in this repo, so what’s going on here? Similar to how different versions of a dependency are available on npm when you install a package, Docker Hub hosts a prebuilt image for each version of MongoDB that you can simply pull down and use.

Once you’ve added this file, you can run the application with the command docker-compose up. You should see log output from all three containers as they start up.


A Few Notes

You can run arbitrary commands within a Docker container by using the command docker exec <container> <command>. To find the name of the container, you can type docker ps, which lists all of your currently running Docker containers.

You can use either the CONTAINER ID or the NAMES column when referring to a container. The names in our example are long because the name I gave the repo is long.

Try running docker exec <id for api> ping mongo. This is like running ping mongo from within the api container. Notice how you get successful responses back! That works because docker-compose puts all three services on a shared network where each container can reach the others by service name. Mongo is not running inside the api container itself (each service runs in its own container), so trying to reach the database at localhost:27017 from within api would fail; the API has to connect to mongo:27017 instead.

Better Programming: Advice for programmers.

Written by Michael Sholty — Software Engineer, formerly @Feathr, @Disney, @FanDuel. Constantly looking for ways to protect myself against myself.
