Executing Cypress tests within an isolated Docker network

Ayush Kulshrestha
MiQ Tech and Analytics
4 min read · Jul 11, 2024


Problem Statement

Executing Cypress component tests in build jobs without a dedicated PR environment, and running such build jobs in parallel.

Background

Cypress tests are executed for each PR build and release build. To run Cypress tests, we need the frontend webpack server to be up, running, and deployed in an environment. Currently, we deploy this server to a dedicated PR environment on EKS. This environment is deployed whenever a PR is raised or a branch is merged to master.

Since there is only a single PR environment, we cannot run multiple PR builds or release builds in parallel. Each build requires its own environment for component tests to succeed.

We are currently using a Jenkins shared lock to run a single build at a time and queue the others. This is a bottleneck in the “Release When Ready” framework, given that each PR build takes around 30 minutes to complete. A developer may wait 30 minutes or much longer when multiple builds are queued.
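For context, the shared-lock serialization looks roughly like this (a sketch using the Jenkins Lockable Resources plugin; the resource and stage names are illustrative):

```groovy
// Every PR build contends for the same resource, so builds queue up
// behind whichever one currently holds the lock.
lock(resource: 'shared-pr-environment') {
    stage('Component tests') {
        sh 'yarn cypress run'
    }
}
```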

Additionally, the PR environment is used only by build jobs and sits idle most of the time. It is a waste of resources, cost, and time.

One solution would be to use an on-demand PR environment on Kubernetes, but the setup and teardown are complex.

Solution

  1. In theory, it should be possible to deploy a temporary environment inside the Jenkins build pipeline to run and test the CTs, just like we typically do for backend CTs.
  2. We can use Docker networks to create isolated environments for parallel builds.
Figure 1: Block diagram for CT setup in Jenkins slave machine

Note: To run the Cypress tests, we use another Docker image that contains an environment with a headless browser, the Cypress code, etc.

Requirements

  1. Two separate docker containers spawned with each build job, containing:
    a. The frontend application (a child micro frontend in our case)
    b. An image to run the Cypress tests
  2. These docker containers should be able to interact with each other.
  3. These docker containers should not interact or interfere with the sets of docker containers spawned by other build jobs.
  4. These docker containers should not interact with the host Jenkins slave machine or block any of its ports or processes.

Changes required in UI application

We need to build and run the Docker image for the UI application and fetch its IP address to pass to the Cypress test runner container.

application-ui.Dockerfile

# Using commons nginx image with brotli support
FROM xxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/xxxxx:nginx-brotli

# COPY dist folder to nginx html dir
COPY dist /usr/share/nginx/html
COPY cypress/nginx.conf /opt/bitnami/nginx/conf/

WORKDIR /usr/share/nginx/html
COPY ./configGenerator.sh .
COPY .env .
RUN chmod +x configGenerator.sh
CMD ["/bin/bash", "-c", "/usr/share/nginx/html/configGenerator.sh && nginx -g \"daemon off;\""]

Note: We are already using the Kubernetes config map to inject the nginx config in the Kubernetes deployment. We can utilize the same file for copying the nginx config inside docker.

Before building and running this image, we need to create a docker network to ensure isolation amongst builds and interaction between the containers.

sh ''' docker network create ${network_name}'''

Next, we need to build and run this image in Jenkinsfile. While running the image we need to attach it to the docker network created.

sh '''
yarn install
yarn run build
docker build -t <repo>:${image_name} \
  -f application-ui.Dockerfile .
docker run --network=${network_name} \
  --name ${container_name} \
  -d <repo>:${image_name}
'''

Once it is up and running, we need to fetch its IP using the following command:

APPLICATION_IP = sh(
    script: "docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${container_name}",
    returnStdout: true
).trim()

We need to pass our network name and the application-ui container’s IP address to the Cypress runner image. This can be done easily in the agent configuration:

agent {
    docker {
        image '${ecr-url}/${cypress-runner-image}'
        args '--network "${network_name}" --add-host=<product.yourorg.com>:${APPLICATION_IP}'
        reuseNode true
    }
}
steps {
    script {
        sh '''
        unset DISPLAY
        yarn install
        npx cypress install
        yarn cypress run \
          --config baseUrl=http://<product.yourorg.com> \
          --browser chrome \
          --spec cypress/spec.js
        '''
    }
}

Notes:

  • The “<product.yourorg.com>” domain is used only to access the application-ui frontend server from the Cypress image. This domain is not accessible outside the Cypress runner container.
  • Make sure to create variables like network_name, image_name, and container_name as a combination of your build number and commit id, as build jobs might be running simultaneously, and using the same variables may cause conflicts.
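The naming scheme from the note above can be sketched in shell. The `ct-` prefixes are hypothetical, and the defaults exist only so the snippet runs outside Jenkins, where `BUILD_NUMBER` and `GIT_COMMIT` are normally injected by the pipeline:

```shell
# Derive collision-free names per build from Jenkins-provided variables.
# BUILD_NUMBER and GIT_COMMIT come from Jenkins; defaults are for illustration.
BUILD_NUMBER="${BUILD_NUMBER:-42}"
GIT_COMMIT="${GIT_COMMIT:-a1b2c3d4e5f6a7b8}"
short_sha=$(printf '%s' "$GIT_COMMIT" | cut -c1-7)

# Each build job gets its own network, container, and image name,
# so parallel builds never collide on Docker resources.
network_name="ct-net-${BUILD_NUMBER}-${short_sha}"
container_name="ct-app-${BUILD_NUMBER}-${short_sha}"
image_name="ct-img-${BUILD_NUMBER}-${short_sha}"

echo "network: ${network_name}"
```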

Once tests are executed, we need to tear down the setup regardless of whether the tests passed or failed.

sh '''
docker stop ${container_name}
docker rm ${container_name}
docker image rm <repo>:${image_name}
docker network rm ${network_name}
'''
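In a declarative Jenkinsfile, a `post { always { ... } }` block is a natural place for this teardown, since it runs whether the test stage passed or failed. A sketch (the `|| true` guards are an assumption, added so a missing container does not fail the cleanup):

```groovy
post {
    always {
        // Remove the container, image, and network even when tests fail,
        // so nothing leaks between builds on the Jenkins slave.
        sh '''
        docker stop ${container_name} || true
        docker rm ${container_name} || true
        docker image rm <repo>:${image_name} || true
        docker network rm ${network_name} || true
        '''
    }
}
```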

Impact

  • We can run CTs without any separate external environment, so the dedicated PR environment can be sunset, saving resources and cost.
  • Parallel builds are now possible, as each build spawns its own environment instead of sharing a common PR environment.
  • This ends build queuing and reduces the waiting time to zero.

About the author

Ayush is a Senior software engineer at MiQ, based in our Bangalore office. He enjoys playing cricket and chess, and also loves traveling and listening to music.
