How to distribute the selenium test execution using Docker?
Why do we need distributed testing?
When it comes to functional test automation of web applications in the browser, Selenium is a very powerful automation tool; it has been adopted rapidly at large scale and keeps gaining popularity. Most importantly, Selenium-based automation supports a wide variety of technology stacks and can be integrated with many other tools.
In theory that sounds great, but many organizations struggle with how to implement distributed testing properly. The objective of distributed testing is to run multiple tests in parallel against multiple environments, which reduces test execution time and achieves cross-platform coverage.
If you are looking to run your tests on a separate machine, or to distribute them across multiple machines, you will need the Selenium Standalone Server or Selenium Grid (to distribute tests across different machines/virtual machines); the Grid acts as a hub that distributes Selenium tests to remote nodes.
What is Selenium Grid?
Selenium Grid is a feature of Selenium that allows WebDriver scripts to be executed on virtual or real machines across different platforms. It has a simple architecture: a hub and one or more nodes. You run your tests against the hub, and the hub distributes them across the different browsers registered with it.
Advantages of Selenium Grid:
- It reduces execution time with distributed testing.
- It allows cross-browser and platform testing.
- When a node is free, it automatically picks up the test case waiting in the execution queue.
Problems of Selenium Grid:
- Hard to configure (Installing Java, downloading selenium standalone server, installing related browsers and drivers, etc.)
- Hard to manage (testing different versions of different browsers)
- The main problem with Selenium Grid is resource usage: each node typically needs its own machine or virtual machine.
What is Docker Selenium?
Docker Selenium uses the same architecture as Selenium Grid, but each component (the hub and each node) runs as a separate container.
What are the advantages of Docker Selenium?
- It gets rid of all the dependencies mentioned above (Java installation, the standalone server, browsers, and drivers).
- Containers don't consume system resources the way VMs do, so the setup is lightweight.
- We can easily manage the version complexity of different browsers through image tags in the Docker registry.
What is Docker?
Docker is an open-source platform, similar in spirit to a virtual machine, built around Docker containers. Using containers, we can develop, ship, and run applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications. Containers allow developers to package up an application with all the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
These packages, or containerized software, will run on any other machine, regardless of the environment and customized settings. Docker is available for both Linux and Windows-based apps.
Installing Docker
The first thing you'll want to do is download Docker and install it. You can download the Docker binaries from the official Docker website.
Steps to be followed when installing Docker:
- Once the binaries are downloaded, install Docker.
- Turn on virtualization on your local machine (in the BIOS/UEFI settings, if it isn't already enabled).
- Optionally, open the Docker settings and tick the checkbox to expose the daemon on tcp://localhost without TLS, but only if a tool you use requires it; be aware that doing so makes you vulnerable to remote code execution attacks.
Once installed, you should see the Docker icon running in your Taskbar (Windows) or Menubar (macOS). I’ll be using macOS for this post.
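A quick sanity check that the installation succeeded is to run the following from a terminal; these are standard Docker commands and have nothing to do with Selenium yet:
$ docker version
$ docker run hello-world
The first prints the client and server (daemon) versions, and the second pulls and runs a tiny test image.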
Using Docker with Selenium:
Once you’ve got Docker up and running, SeleniumHQ maintains a whole range of Docker images that you can pull down and start using right away.
The list of images is available at Docker Hub or you can browse the project repository on GitHub. The GitHub repository also provides some resources, and you can submit issues if you come across anything.
The images are roughly split into:
- Standalone — Images that create a standalone Selenium server. You’ll only be able to run one of these at a time on your local machine or the port (4444) will conflict.
- Hub — Image that creates a central Selenium server in grid configuration.
- Node — Images that are used in conjunction with the “Hub” image to create a Selenium grid. You can start multiple node containers that connect to your Hub image.
- Base — Images that you can use to build your own images
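If you like, you can pull any of these images ahead of time; the docker run commands in the next sections will download them automatically if they are not already present locally. For example:
$ docker pull selenium/standalone-chrome:3.4.0
$ docker pull selenium/hub:3.4.0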
Getting a Standalone Server Up and Running
Open the terminal and run the following command to get a standalone selenium server working:
$ docker run -d -p 4444:4444 selenium/standalone-chrome:latest
- -d runs the container in the background (detached)
- -p 4444:4444 maps the local port 4444 to the port 4444 used by the Selenium Server in the container
- :latest is the tag/version of the image to use
The above will get the latest released image for the Chrome Standalone Server. Alternatively, you can specify a specific version by using the relevant tag. For example:
$ docker run -d -p 4444:4444 selenium/standalone-chrome:3.4.0
To use Firefox instead, use:
$ docker run -d -p 4444:4444 selenium/standalone-firefox:3.4.0
Once you run the command, Docker will download the image (if it isn't already present locally) and start the container straight away.
You can then use docker ps to check that the container is running.
Note: The GitHub repository offers some additional recommended options for running Chrome and Firefox in an optimal configuration in its README.md. For example, when executing docker run for an image with the Chrome browser, it's recommended you add a volume mount -v /dev/shm:/dev/shm to use the host's shared memory:
$ docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:3.4.0
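One more note on the standalone images: as mentioned earlier, two standalone containers can't both claim host port 4444. If you want the Chrome and Firefox standalone servers running at the same time, map one of them to a different host port (4445 here is just an arbitrary choice):
$ docker run -d -p 4445:4444 selenium/standalone-firefox:3.4.0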
Running a Grid:
Setting up a Selenium Grid only requires a few different steps. To start a small grid with 1 Chrome and 1 Firefox node you can run the following commands:
$ docker run -d -p 4444:4444 --name selenium-hub selenium/hub:3.4.0
$ docker run -d --link selenium-hub:hub selenium/node-chrome:3.4.0
$ docker run -d --link selenium-hub:hub selenium/node-firefox:3.4.0
- --name assigns a specific name, “selenium-hub”, to the Hub container
- --link links one container to another
Once you have run the above commands and the images have been downloaded and started, running docker ps should show the hub and both node containers.
Unlike the standalone server, when Selenium is used in a grid configuration you can access a console that displays the nodes and browsers connected to the grid.
Using your preferred browser, go to http://localhost:4444/grid/console to bring up the console and confirm that you do, in fact, have a grid up and running with a single Chrome node and a single Firefox node.
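If you prefer checking from the terminal, you can poll the hub with curl. The console URL is the same one mentioned above; the JSON status endpoint shown second is an assumption that holds for recent 3.x hubs and may differ on older tags:
$ curl -s http://localhost:4444/grid/console
$ curl -s http://localhost:4444/wd/hub/status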
Of course, if you want to add more nodes to your grid you can simply repeat the commands to add individual nodes as needed. For example:
$ docker run -d --link selenium-hub:hub selenium/node-chrome:3.4.0
$ docker run -d --link selenium-hub:hub selenium/node-chrome:3.4.0
$ docker run -d --link selenium-hub:hub selenium/node-chrome:3.4.0
Testing the Containers Out:
Using either the Standalone or Grid containers is as simple as pointing your test code/runner/scripts towards http://localhost:4444/wd/hub.
If you are familiar with or use php-webdriver (or any other WebDriver client binding), you would simply update your host configuration to point to the above address.
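The same idea applies in any language binding: you create a remote driver against that URL with the desired browser capabilities. As an illustration (not taken from the original post), here is a minimal sketch using Python's selenium package with the Selenium 3-era API that matches the 3.4.0 server images used above:
# remote_example.py -- minimal sketch against the standalone server or grid hub
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Point the Remote driver at the container(s) started above.
# Selenium 3-style API; Selenium 4 clients pass options= instead of desired_capabilities.
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.CHROME,  # use DesiredCapabilities.FIREFOX for a Firefox node
)
driver.get("https://www.google.com")
print(driver.title)  # prints the page title fetched via the remote browser
driver.quit()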
Accessing Local Development Sites
If you do plan on using the Docker containers to access local development sites, there are some additional considerations.
If you are using Windows or Linux, you can simply use the --net=host option when running the containers.
For example: $ docker run --net=host selenium/standalone-chrome
On macOS, however, it’s a little trickier due to the way the Docker app itself works (the above option just doesn’t work the same or as expected).
The easiest workarounds I’ve found are:
- Use your host machine's local network IP address for the URL, e.g. http://192.168.1.168. This often isn't ideal, especially if you have a few sites.
- Use the special macOS-only DNS name available in Docker for Mac from 17.03 onward, docker.for.mac.localhost, which resolves to the internal IP address used by the host. Again, this likely won't be ideal if you have a few sites or just don't like the address.
- Use the --add-host option to add your local test site to the Docker hosts/containers, e.g. --add-host store.localhost:192.168.1.191, which you would use when running the standalone or node containers like so: docker run -d --link selenium-hub:hub --add-host store.localhost:192.168.1.191 selenium/node-firefox:3.4.0.
- Configure and use a DNS server. You can use the --dns option to update the Docker containers to use a specific DNS server, e.g. docker run -d --dns 54.252.183.4 --link selenium-hub:hub selenium/node-chrome:3.4.0.
Using the Compose tool for defining and running a multi-container Docker application:
We can use a YAML file to configure our application's services. Then, with a single command, we can create and start all the services from that configuration.
Compose works in all environments: production, staging, development, testing, as well as CI workflows.
Using Compose is basically a three-step process:
- Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker-compose up and Compose starts and runs your entire app.
A docker-compose.yml for the grid above might look something like the sketch below.
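This is a sketch, not a drop-in file: the service names are my own, and the HUB_HOST/HUB_PORT environment variables are an assumption that holds for recent 3.x node images; older tags such as 3.4.0 may expect the legacy HUB_PORT_4444_TCP_ADDR/HUB_PORT_4444_TCP_PORT variables instead, so check the docker-selenium README for the tag you use.
# docker-compose.yml (sketch): one hub, one Chrome node, one Firefox node
version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.4.0
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      # Assumed variable names; older images may use HUB_PORT_4444_TCP_ADDR/_PORT
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox:3.4.0
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
With this file in place, docker-compose up -d brings the whole grid up and docker-compose down tears it down again. Recent docker-compose versions can also scale the node services, e.g. docker-compose up -d --scale chrome=3 (using the service name assumed above).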