
End-to-end test NestJS microservices using Docker and GitLab CI

Robert-Jan Kuyper

--

Imagine a world where developers could mimic production locally, without using mocks. A world where developers can even end-to-end test their applications against real systems, without connecting to the actually deployed systems. Welcome to the world of Docker.

In this article I’ll give a brief overview of how to end-to-end test multiple microservices using GitLab CI. We will mimic a production environment, fully automated, both locally and in a pipeline. The approach described in this article can also be applied to any other CI tool.

Before reading this article, I strongly recommend reading my previous article E2E test API’s using Docker and GitLab CI first.

For a full working example, see the project.

The problem

Let’s assume we have 3 separate (NodeJS) applications; each application has its own repository and exposes a single endpoint on port 3000. We call the applications Server A, Server B and Server C, where Server A is our current application and Server B and Server C are simply 2 dependencies of Server A. Server A acts as a proxy to fetch data from both Server B and Server C and exposes it to the outside world.

The architecture, where Server A acts as a proxy

Server B and C are both plain NodeJS applications without any dependencies.

All the examples for Server B and C are exactly the same. Therefore, I only show the cats implementation of Server C in this article. Both implementations are visible on GitLab.

Both applications are exactly the same, with the exception of the data they expose. Server B exposes some dogs and Server C some cats:
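The snippet below is a minimal sketch of what Server C could look like; the file name and the data are assumptions of mine, the real code lives in the linked repositories:

```js
// server.js: a minimal sketch of Server C; Server B is identical but serves dogs on GET /dogs.
// The cat names are made up for illustration.
const { createServer } = require('http');

const cats = [{ name: 'Whiskers' }, { name: 'Felix' }];

createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/cats') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(cats));
    return;
  }
  // Anything else is not served by this microservice
  res.writeHead(404);
  res.end();
}).listen(3000, '0.0.0.0');
```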

To end-to-end test such an application we have roughly 3 options:

Option 1: mock the microservices

In many cases we see developers use tons of mocks in their applications to mimic the expected responses of a microservice. However, those mocks need to be created and maintained. It costs quite a lot of effort to continuously update those mocks and keep them in sync with the real system: making a change in a microservice forces you to also change the mocks.

Option 2: E2e test against real deployed environments

In order to run end-to-end tests you can also connect to the real deployed dependencies. It is certainly a good idea to run such a test, but it would not be ideal if we want to end-to-end test each feature branch. Let’s say you also want to load test your system to test its stability; then you would directly affect the deployed microservices as well. I would consider this approach second best.

Option 3: E2e test against a dockerized version of the microservice

End-to-end testing against the dockerized production environment of the microservice is a real game changer. We can simply add a step in our pipeline to create a Docker image during release and store it in a private registry, in order to use it in other pipelines. Now we can safely run all kinds of heavy tests against our system and see how it actually behaves, even on a feature branch.

But in order to achieve this, we need to dockerize the microservices first and make sure we always run against the latest version. Therefore, we simply don’t tag the image, so Docker always resolves to latest.

Let’s get started with implementing option 3.

Dockerizing the microservices

Before we can connect the microservices, we have to dockerize both Server B and Server C first. The following image shows what we are going to do:

Dockerized microservices approach

When using GitLab we follow these steps:

  1. Create a Dockerfile for both Server B and Server C
  2. Create a .gitlab-ci.yml for both Server B and C to build and push the images to GitLab’s private registry
  3. Use the images of GitLab’s private registries

1. Create a Dockerfile

Our first step is to create a Dockerfile for both Server B and Server C that uses node:16.9.0 as a base image:
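A minimal sketch of such a Dockerfile; the entry file name server.js is an assumption:

```dockerfile
# Sketch of the Dockerfile for Server B and Server C
FROM node:16.9.0

WORKDIR /app
# No dependencies to install, so copying the source is enough
COPY . .

# The service listens on port 3000
EXPOSE 3000
CMD ["node", "server.js"]
```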

That’s it!

In case you have dependencies, like node_modules, don’t forget to add a .dockerignore.

2. Build the Dockerfile using GitLab CI

With the Docker Container Registry integrated into GitLab, every GitLab project can have its own space to store its Docker images. Each repository has a private registry available, so a registry for both Server B and Server C. To push an image into the registry we can create a .gitlab-ci.yml:
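The sketch below shows what such a file could look like for Server C, following the steps broken down next; the job name is an assumption of mine:

```yaml
# Sketch of the .gitlab-ci.yml for Server C; Server B is identical,
# except for the image suffix (dogs instead of cats).
default:
  image: docker
  services:
    # Expose docker-in-docker at the `docker` hostname
    - name: docker:dind
      alias: docker
  before_script:
    # Log in to the private registry with the per-pipeline CI_JOB_TOKEN
    - echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin "$CI_REGISTRY"

build:
  script:
    # CI_REGISTRY_IMAGE points at this project's registry; add the `cats` suffix
    - docker build -t "$CI_REGISTRY_IMAGE/cats" .
    - docker push "$CI_REGISTRY_IMAGE/cats"
```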

Let’s break down the most important parts of the file:

  • First we assign a default job that uses docker as the base image and docker-in-docker as a service, exposing docker-in-docker at the docker hostname.
  • To log in to the private registry, GitLab provides a unique token for each pipeline run called CI_JOB_TOKEN. We use this environment variable, in combination with the GitLab-provided username gitlab-ci-token, to pipe it into the docker login command. We do this in a before_script, so we log in before our job runs.
  • Now we can safely build and push the image to the private registry. Note the CI_REGISTRY_IMAGE variable provided by GitLab; for an overview of GitLab CI variables see the official docs. We add the suffix dogs for Server B and cats for Server C.

3. Use the image of GitLab’s private registry

In GitLab each registry uses the following naming convention:

<registry URL>/<namespace>/<project>/<image>

In our example we use the default GitLab registry, so our images are stored at:

registry.gitlab.com/m6093/server-b/dogs
registry.gitlab.com/m6093/server-c/cats

We can now pull the images using Docker Compose:
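A sketch of such a Compose file, pulling both images straight from the private registries:

```yaml
# docker-compose.yml: pulling the prebuilt microservice images (sketch)
services:
  dogs:
    image: registry.gitlab.com/m6093/server-b/dogs
  cats:
    image: registry.gitlab.com/m6093/server-c/cats
```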

In GitLab we can use the same images, but with a slightly different syntax:
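A sketch of that services block:

```yaml
# .gitlab-ci.yml fragment: the same images defined as GitLab CI services (sketch)
services:
  - name: registry.gitlab.com/m6093/server-b/dogs
    alias: dogs
  - name: registry.gitlab.com/m6093/server-c/cats
    alias: cats
```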

The hostname of the container can be overwritten using the alias keyword.

The main application

Now we can finally create our main application and connect it to both microservices. Our main application can be written in any language of choice. In our case, we have a sample NestJS application that exposes 3 routes, 2 for proxying and 1 for health checks (a sketch of the controller follows the list):

  • HEAD /health
  • GET /cats
  • GET /dogs
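
A minimal sketch of what the proxy controller could look like, assuming @nestjs/axios handles the outgoing HTTP calls; the file and class names are assumptions of mine, the real implementation lives in the linked project:

```ts
// app.controller.ts: sketch of the main application's proxy controller.
// Note: HttpModule from @nestjs/axios must be imported in the AppModule.
import { Controller, Get, Head } from '@nestjs/common';
import { HttpService } from '@nestjs/axios';
import { firstValueFrom } from 'rxjs';

@Controller()
export class AppController {
  constructor(private readonly http: HttpService) {}

  // Health endpoint supporting HEAD requests, used by wait-on in the pipeline
  @Head('health')
  health(): void {}

  @Get('dogs')
  async dogs() {
    // DOGS_URI points at Server B, e.g. http://dogs:3000
    const res = await firstValueFrom(this.http.get(`${process.env.DOGS_URI}/dogs`));
    return res.data;
  }

  @Get('cats')
  async cats() {
    // CATS_URI points at Server C, e.g. http://cats:3000
    const res = await firstValueFrom(this.http.get(`${process.env.CATS_URI}/cats`));
    return res.data;
  }
}
```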

The GET requests proxy to either Server B (dogs) or Server C (cats). Using environment variables, we connect to Server B and Server C. Our main application will be exposed on 0.0.0.0, port 3000. To have a production-like development environment locally we use Docker Compose:
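A sketch of that docker-compose.yml; the api service name is an assumption:

```yaml
# docker-compose.yml for the main application (sketch)
services:
  api:
    build: .
    ports:
      - '3000:3000'
    environment:
      # The service names below double as hostnames via Docker's inter-container networking
      DOGS_URI: http://dogs:3000
      CATS_URI: http://cats:3000
  dogs:
    image: registry.gitlab.com/m6093/server-b/dogs
  cats:
    image: registry.gitlab.com/m6093/server-c/cats
```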

Note that we expose 2 environment variables: DOGS_URI for Server B and CATS_URI for Server C. Using Docker’s inter-container networking we connect to the microservices. To verify that everything works as expected, clone the repository and run docker compose up or docker-compose up.

End-to-end test the main application

Now that we have our main application and the 2 dockerized microservices running, we can start with end-to-end testing. In our case we use Supertest and Jest to test whether we can get some dogs and some cats:
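A sketch of such an end-to-end test, assuming the API is reachable on localhost:3000; the exact assertions are assumptions of mine:

```ts
// app.e2e-spec.ts: sketch of the end-to-end tests using Supertest and Jest
import * as request from 'supertest';

// The API is expected to be up and running before the tests start
const api = request('http://localhost:3000');

describe('AppController (e2e)', () => {
  it('GET /dogs returns some dogs', async () => {
    const res = await api.get('/dogs').expect(200);
    expect(res.body.length).toBeGreaterThan(0);
  });

  it('GET /cats returns some cats', async () => {
    const res = await api.get('/cats').expect(200);
    expect(res.body.length).toBeGreaterThan(0);
  });
});
```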

Run the tests via npm run test and everything should look OK.

Putting it all together

We can now create the pipeline file that starts both microservices and runs our main application in detached mode, in order to run our tests in GitLab CI. To mimic the environment in GitLab CI, we’ll use GitLab services.

According to GitLab:

The services keyword defines a Docker image that runs during a job linked to the Docker image that the image keyword defines. This allows you to access the service image during build time.

Consider the following .gitlab-ci.yml:
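A sketch of what that file could look like, based on the breakdown below; the npm scripts are assumptions of mine:

```yaml
# .gitlab-ci.yml for the main application (sketch)
e2e:test:
  image: node:16.9.0
  services:
    # The microservice images, exposed under their hostname aliases
    - name: registry.gitlab.com/m6093/server-b/dogs
      alias: dogs
    - name: registry.gitlab.com/m6093/server-c/cats
      alias: cats
  variables:
    DOGS_URI: http://dogs:3000
    CATS_URI: http://cats:3000
  script:
    - npm ci
    # Start the API in detached mode with a trailing ampersand
    - npx nest start &
    # wait-on pings HEAD /health until it returns a 2XX status code
    - npx wait-on http://localhost:3000/health
    - npm run test
```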

Let’s break down the most important pieces:

  1. To connect to the microservices we define both CATS_URI and DOGS_URI, as http://cats:3000 and http://dogs:3000.
  2. To expose the microservices we use 2 services, each with its own hostname alias.
  3. To start the API process in detached mode in the e2e:test job, we run npx nest start followed by an ampersand (&).
  4. To verify that the API is up and running, we provide a health endpoint that supports HEAD requests, so the NPM package wait-on can ping the API until it returns a 2XX status code.
  5. Finally, we run some tests.

Verify the pipeline

Now we can check the job logs in GitLab; the run should end with a passing Jest summary.

Voilà, our pipeline is done and runs smoothly!

Follow me for more interesting topics! And don’t forget to clap. See the project for a full working example. Happy coding!

--

Robert-Jan Kuyper

Senior Backend Engineer specialised in NodeJS, NestJS, Docker and CI/CD | https://datails.nl/