How to create dynamic E2E testing environments

Shai Tubul
Published in Soluto by asurion
5 min read · Jan 2, 2022

One of the biggest fears of any developer is deploying a shiny new feature and having to revert it right after because it broke something. Even when you write excellent tests and verify everything locally, sometimes it’s just not enough, and something breaks in integration (along with your sensitive heart ♥️).

This article will show how we solved this pain here at Soluto by using E2E testing environments. Read on! 🚀

What is an E2E testing environment?

Imagine that you could pack all of your 100+ microservices (backend, frontend, DBs, queues, and cloud infrastructure) into one bundle and deploy it all, together with your new feature’s code, into a clean environment. This is called an E2E testing environment. You can use it to mimic production and feel free to break it. Automatic integration tests run on the environment, and you can also verify manually that nothing breaks in the bigger picture.

Sounds great! How can I implement such a thing?

There are several ways to implement E2E testing environments (as with most software development concepts). In the following paragraphs, I’m going to explain how we implemented it here at Soluto:

Which practice did we choose and why?

When thinking about how to deploy our testing environment, the first thought was, “Why not use the same deploy process we use for production today?”, but we quickly understood it was wrong for us. Deploying hundreds of microservices, lambdas, DBs, queues, etc. takes a long time and costs a lot, and it’s also hard to manage such an infrastructure for each testing environment. So we decided to try to establish the whole ecosystem on one EC2 machine. We searched for a tool that would allow us to deploy and manage our ecosystem on a single machine and found Tilt. Tilt lets you define the deployment process in code and works with existing Docker images. With Tilt, it’s easy to manage multiple environments, deploy fast, and use the same language for all deployment processes.

Tilt also provides a friendly UI that shows the state of each component in the environment, lets you read logs, and lets you re-run a single piece without restarting all the services in the environment.
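To make this concrete, here is a minimal Tiltfile sketch in the spirit of what’s described above; the service name, image, manifest path, and dependencies are hypothetical, not our actual configuration:

# Tiltfile (Starlark) -- hypothetical service names and paths, for illustration only

# Build the Docker image for a service from its local checkout.
docker_build('orders-service', './orders-service')

# Load the Kubernetes manifests that describe the service.
k8s_yaml('orders-service/k8s/deployment.yaml')

# Configure the resource: expose a port and declare dependencies so Tilt
# only starts this service after its prerequisites are ready.
k8s_resource(
    'orders-service',
    port_forwards='8080:8080',
    resource_deps=['mongodb'],
)

Because every service is declared this way, Tilt can bring up the whole environment in the right order and show each resource’s status in its UI.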

How to deploy Kubernetes clusters and AWS tools on one machine?

In production, we deploy our microservices with Kubernetes, and for other infrastructure such as queues, lambdas, DBs, and more, we use AWS.
We needed to find alternatives for these tools when running everything on one machine.

For running our Kubernetes clusters on one machine, we chose Kind. Kind is a tool for running local Kubernetes clusters using Docker container nodes, and it is straightforward to use.
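As a rough sketch, creating such a cluster can look like this (the cluster name and node layout are illustrative, not our exact setup):

# kind-config.yaml -- illustrative single-machine cluster layout
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker

# Create the cluster from the config above:
# kind create cluster --name e2e-env --config kind-config.yaml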

We still needed a tool to deploy our AWS infrastructure on one machine. For this purpose, we chose Localstack. Localstack is a cloud service emulator that runs in a single container on one machine, and it lets us mock all the AWS infrastructure we need. Localstack is not identical to the real AWS services, but since we test our code and not AWS infrastructure, we feel comfortable using mocked AWS infrastructure.
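For example, Localstack can run as a single container defined in a small compose file; the service list below is just an illustration of what such a setup might include:

# docker-compose.yaml -- minimal Localstack setup, values are illustrative
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"                       # edge port all mocked AWS services listen on
    environment:
      - SERVICES=s3,sqs,lambda,dynamodb   # limit the emulator to the services we need

Services under test then talk to the mocked AWS APIs through that single edge endpoint (for example, by pointing the AWS SDK at http://localhost:4566) instead of real AWS.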

Configuring and deploying an environment

We defined YAML files that describe the E2E testing environments, so each YAML file describes one environment with the following fields (a hypothetical example follows the list):

  • version: Used to determine which version of the environment to deploy.
  • description: Describes what the environment is going to test.
  • projects: A list of all the projects on GitHub that we want to run, and from which branch.
  • flags: Flags that are injected as environment variables on each run.
  • deployment: Contains some parameters about the deployment flow.
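
An environment file could look like this; the project names, flags, and deployment parameters are invented for illustration and are not our real configuration:

# e2e-envs/checkout-flow.yaml -- hypothetical example
version: 1
description: Verify the checkout flow end to end
projects:
  - repo: orders-service
    branch: feature/new-discount-logic
  - repo: payments-service
    branch: main
flags:
  FEATURE_NEW_DISCOUNTS: "true"
deployment:
  instance_type: t3.xlarge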

Once changes are pushed to a YAML file, a GitHub Action performs the environment deployment process. First, an AWS CDK script establishes a new AWS EC2 instance. During the EC2 instance setup, we pull from GitHub all of the projects defined in the YAML file. Then, the Tilt scripts we created deploy the services using their Docker images: Kind runs the Kubernetes clusters, and Localstack deploys the AWS infrastructure.
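The trigger itself can be a simple workflow that reacts to changes under the environments directory; the paths, stack name, and steps below are illustrative rather than our actual pipeline:

# .github/workflows/deploy-e2e-env.yaml -- illustrative trigger
name: Deploy E2E testing environment
on:
  push:
    paths:
      - "e2e-envs/**.yaml"
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Provision the EC2 instance that will host the environment.
      - name: Deploy EC2 instance with CDK
        run: npx cdk deploy e2e-env-stack --require-approval never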

Running automatic E2E tests

All of our services are up and running, and now it’s time to run some tests 🤓.
The first step is to verify that all of the services required for each test flow exist and are running in the specific environment (it’s not mandatory to set up all of the services in every e2e testing env). We can do this with the Tilt configuration:

k8s_deploy('lambda-save-to-mongo-db-test', '3000:3000', resource_deps=['mongodb', 'create-aws-resources'])

The test above will run only once the `mongodb` and `create-aws-resources` steps are done, preventing failures such as an unreachable DB or lambda.

These tests run automatically on every e2e testing environment setup and ensure that everything works together as expected.
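Test suites that run as plain commands rather than as in-cluster services can be wired up in the same way. Here is a minimal sketch using Tilt’s local_resource, with a made-up test command and the dependency names from the example above:

# Run the E2E test suite only after the services it depends on are ready.
local_resource(
    'e2e-tests',
    cmd='npm run test:e2e',   # hypothetical test command
    resource_deps=['lambda-save-to-mongo-db-test', 'mongodb', 'create-aws-resources'],
)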

Summary

Creating e2e testing environments helps us test features that involve several services and lets us be confident in our code before deploying it. It’s essential to build infrastructure that makes it easy for users to create an environment (otherwise, probably no one will use your tool 😉). As part of your e2e testing environment, try to include a UI that reflects the status of each service in the environment, so it will be easy to detect and solve issues. Use mocks for the parts of the architecture that you don’t need to test yourself (AWS infrastructure, DBs, etc.). Finally, add some automatic e2e tests that verify the main flows work correctly; this will help prevent fatal bugs.

I hope you enjoyed the article and found some inspiration in it.

See you,
Shai Tubul 💫
