Practical guide to performing visual testing in CodeceptJS

Vitalii Sotnichenko
Published in Byborg Engineering · Dec 21, 2020

Applitools

Setup and configuration in CodeceptJS

1 Create an account at Applitools. You can use either your GitHub account or email address to sign up.

2 Install the following npm dependencies
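
A minimal sketch of this step, assuming the community codeceptjs-applitoolshelper package (the exact package name and dependency list may differ in your setup, so check the helper’s README):

npm install --save-dev codeceptjs-applitoolshelper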

3 Add the Applitools helper to the “Helpers” section in the CodeceptJS configuration file

Please provide the following info to fully utilize the helper (a configuration sketch follows the list):

  • applitoolsKey (Required): You can find your API key in the user menu on the right side of the Test Manager toolbar, under the “My API Key” button
  • windowSize (Optional): if not provided, the default size will be 1920x600. windowSize always follows this precedence: the ApplitoolsHelper setting first, then the WebDriver helper’s setting.
  • appName (Optional): you can set your desired application name; if no name is provided, the default is “Application Under Test”.
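
Here is a minimal sketch of the helper entry in codecept.conf.js. The require path assumes the codeceptjs-applitoolshelper package, and the values are placeholders:

helpers: {
  ApplitoolsHelper: {
    require: 'codeceptjs-applitoolshelper',
    applitoolsKey: 'YOUR_APPLITOOLS_API_KEY', // required
    windowSize: '1920x1080',                  // optional; default 1920x600 (check the helper docs for the exact format)
    appName: 'My Web App'                     // optional; default "Application Under Test"
  }
}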

Correct usage

We have a simple test below, demonstrating how we make a screenshot of the home page.

We can give eyeCheck 3 parameters:
- pageName — the name of the page you want to check
- uniqueId — a unique id to combine tests into a batch
- matchLevel — sets the match level. Possible values: Exact, Strict, Content, Layout
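
Here is a hedged sketch of such a test. The page URL, search selectors, and names are illustrative; eyeCheck follows the parameter list above:

Feature('Visual testing with Applitools');

Scenario('Home page search looks as expected', async ({ I }) => {
  I.amOnPage('/');
  I.fillField('#search', 'visual testing'); // illustrative selector and query
  I.click('Search');
  // pageName, uniqueId (batch id), matchLevel
  await I.eyeCheck('homePage', 'home-search-batch', 'Strict');
});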

Our application will conduct the specified search when we run our test. Eyes will make a screenshot and send it to the Applitools cloud. The image will be validated using an AI comparison. The first time we run this test, we set a new baseline image for future tests.

The Applitools Test Manager dashboard highlights all our visual tests and related results. Our new tests are always marked as “new”.

Since it’s our first run and we haven’t stored a base image yet, the baseline image section will be empty. Before approving an image as the baseline, we must make sure it corresponds to our expectations.

We need to approve the image as a baseline image by clicking on the “thumbs up” icon, or reject it with the “thumbs down” icon.

Let’s run our test once more to compare a new image with our baseline image. You’ll notice a new image was added to the list as “unresolved”, which means there is a discrepancy between the two images.

The “unresolved” status allows us to review each screenshot to determine if the test failed or if the baseline image needs to be updated. Discrepancies will always be highlighted in pink on the image.

If you agree with the changes, just click on the “thumbs up” icon to save the most recent image as the baseline image. Otherwise, you can simply reject it by clicking on the “thumbs down” icon and pressing “save”. The test is now marked as “failed”.

Applitools provides us with various comparison levels so users have flexible options for validation.

Match levels:

  • Exact — a pixel-by-pixel comparison of the images. This level is unreliable because it compares every pixel of the baseline image against the new image, and fonts and colors can differ slightly between environments.
  • Strict — AI compares the two images and highlights differences that the human eye can see
  • Content — only compares content and doesn’t take color differences into account
  • Layout — only verifies the layout of the images. Applitools will ignore all differences except layout changes. This helps especially when you have a lot of dynamic data and you want to make sure the structure is correct.

Let’s see what a test with a “Layout” match level looks like.
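
A sketch of the same check with the match level switched to “Layout” (same assumptions as the earlier sketch):

await I.eyeCheck('homePage', 'home-search-batch', 'Layout');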

Make sure that the dashboard value for the match level was changed from “Strict” to “Layout”. Despite the discrepancy between images, the test is successful because the structure is the same.

Visual Regression Tracker

Setup:

1 Install Docker

2 Copy the docker-compose file using the following curl command

$ curl https://raw.githubusercontent.com/Visual-Regression-Tracker/Visual-Regression-Tracker/master/docker-compose.yml -o docker-compose.yml

The docker-compose file consists of 4 services:

  • UI
  • API
  • Postgres database
  • migration

3 Copy the .env file using the following curl command

curl https://raw.githubusercontent.com/Visual-Regression-Tracker/Visual-Regression-Tracker/master/.env -o .env

4 Start the services with “docker-compose up” and wait until the database is created

By default, the frontend is available at http://localhost:8080

You can log in to the dashboard with the details printed to the console when running docker-compose. Once you are logged in, you can see the default project with two main options: builds (where your tests will be available) and variations (the branch system).

Integration with CodeceptJS

1 Install the visual-regression-tracker/agent-codeceptjs package

npm install @visual-regression-tracker/agent-codeceptjs

2 Add VisualRegressionTrackerHelper to the codecept.conf.js file. We can get the data we need to fill in the config file either from our profile on the web interface, including our API key, or from the logs while running docker-compose.

Then, you can simply copy and paste them to the “Helpers” section in the CodeceptJS config file.
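
Here is a sketch of the helper entry in codecept.conf.js. The field names follow the Visual Regression Tracker SDK configuration, and the values (API URL, project, key, branch) are placeholders taken from the web interface or the docker-compose logs:

helpers: {
  VisualRegressionTrackerHelper: {
    require: '@visual-regression-tracker/agent-codeceptjs',
    apiUrl: 'http://localhost:4200', // VRT API service started by docker-compose
    project: 'Default project',
    apiKey: 'YOUR_VRT_API_KEY',
    branchName: 'master'
  }
}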

Correct usage

Before we start executing the test, we need to run the I.vrtStart() command. After the test, we must run the I.vrtStop() command.

By using the I.vrtTrack() command, we can create a screenshot. We also set track options inside this function:

  • os
  • device
  • browser
  • diffTolerancePercent
  • ignoreAreas array

An example of a test using Visual Regression Tracker:
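
(A hedged sketch based on the commands described above; the page URL and image name are illustrative, and vrtTrack is assumed to take the image name followed by the track options.)

Scenario('Home page visual check with VRT', async ({ I }) => {
  await I.vrtStart();            // open a VRT build before tracking images
  I.amOnPage('/');
  await I.vrtTrack('Home page', {
    os: 'Linux',
    device: 'Desktop',
    browser: 'Chrome',
    diffTolerancePercent: 0,
    ignoreAreas: []              // regions excluded from the comparison
  });
  await I.vrtStop();             // close the build so the results appear in the UI
});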

Because we have not set a baseline image yet, we will get an error when running a test for the first time.

Next, we should open the web interface and see what we are working with. It’s a pretty cool interface with 2 sections for images — one for the baseline image and another for the new image. In addition, our environment is indicated at the top of the page (OS, device, browser, etc.) alongside the “Diff” and “Diff tolerance” values.

The details I like about VRT are the 2 buttons: “Approve” and “Reject”. By clicking on “Approve”, we accept the newest image and set it as our new baseline image, which changes the status from “New” to “Approved”. If we do not agree with the change, we should select the “Reject” option. Unlike with ResembleJS, we don’t need to store the images in our repository when using this tool, because all images are stored in the Visual Regression Tracker.

Let’s run our test again, this time without any errors!

After opening the web interface, we can see 2 images — the baseline and new image. Because they pass the pixel image comparison, the test is a success!

Now, let’s try to fail the next test by slightly changing the CSS code. After introducing the difference, let’s run the test again. The test fails with a “difference found” error.

After opening the web interface, we can see that the status of the test changed to “Unresolved” and the differences between the two images are highlighted in red.

Both images will move if the user drags and drops them. This can help with reviewing the entire image. We will also get the option to switch between views, allowing us to view the image comparison either with or without the differences highlighted.
An essential part of this tool as well as a critical functionality of visual testing is the “Ignore areas” option. By selecting a region on the new image, we can command the tool to ignore discrepancies between the baseline and new images, especially if these areas are related to dynamic content, or we know of layout changes.

Now, let’s save our changes and run the test again to make sure it successfully ignores the highlighted regions. To delete any highlighted areas, simply click on the recycle bin icon after selecting a region.

The tool also supports version history, so we’re able to see how our image has changed through versions.

By applying a tolerance level, we can prevent slight pixel discrepancies from failing the test. Let’s set the diffTolerancePercent to 10% and verify what the UI will display.
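
A sketch of that check, reusing the (assumed) track call from the earlier sketch:

await I.vrtTrack('Home page', { diffTolerancePercent: 10 });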

The UI will display the diff tolerance and diff percentage values. Because the diff tolerance was set to 10%, the test passes even though there is a 0.13% discrepancy between the images.

ResembleJS

Using ResembleJS in CodeceptJS

Setup and configuration with CodeceptJS

1 Install the codeceptjs-resemblehelper package

2 Add the Resemble helper to the Helpers section in the CodeceptJS configuration file

Users must provide 3 parameters to use the helper (a configuration sketch follows the list):

  • screenshotFolder: This should always match the output folder in the CodeceptJS configuration. This is the folder where WebDriver saves a screenshot when using the I.saveScreenshot method
  • baseFolder: This is the folder for base images, which will be used for comparisons
  • diffFolder: This is the folder where Resemble will store the diff images that can be viewed later
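
Here is a sketch of the helper entry in codecept.conf.js; the folder paths are illustrative and should match your project layout:

helpers: {
  ResembleHelper: {
    require: 'codeceptjs-resemblehelper',
    screenshotFolder: './output/',       // must match the CodeceptJS output folder
    baseFolder: './screenshots/base/',   // baseline images live here
    diffFolder: './screenshots/diff/'    // diff images are written here
  }
}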

Codeceptjs-resemblehelper comes with two major functions:

  1. seeVisualDiff — this compares 2 images to calculate a mismatch percentage based on 2 parameters — baseImage and options. baseImage is simply the baseline image’s name, while options lets us define details such as the tolerance level. The ResembleJS helper takes the tolerance level into account when testing.
async seeVisualDiff(baseImage, options) {
  // options, e.g. { prepareBaseImage: true, tolerance: 5 }
  await this._assertVisualDiff(undefined, baseImage, options);
}

2. seeVisualDiffForElement — this compares the elements on 2 separate images to calculate the mismatch percentage. This is important when working with dynamic data and comparing individual regions or parts of an application. It is based on 3 parameters:

  • selector — it could be CSS, XPATH or ID
  • base image
  • options

By using a combination of these 2 tools we can use CodeceptJS to take a screenshot and ResembleJS to compare the images and report the differences.

3 Create a base image

We should pass prepareBaseImage: true to the I.seeVisualDiff function to create a base image.

4 Add tests and run them — try to save the screenshot and compare it to the base images. Make sure you disable the prepareBaseImage value (prepareBaseImage: false), as in the sketch below.
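
A sketch covering steps 3 and 4; the page URL and image name are illustrative, and the tolerance matches the 2% mentioned below:

Scenario('Home page matches the base image', async ({ I }) => {
  I.amOnPage('/');
  I.saveScreenshot('homepage.png');
  // first run: prepareBaseImage: true to create the base image (step 3),
  // then switch it to false so the comparison actually runs (step 4)
  I.seeVisualDiff('homepage.png', { prepareBaseImage: false, tolerance: 2 });
});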

The test will be successful because our base image and current image are the same and no new images will be created in our project directories.

Let’s change our HTML code to fail our test and then run the test again with an image comparison.

Our test will fail because of the high value of the mismatch percentage. As we can see, there is an 86% mismatch with an expected discrepancy no larger than 2% (mentioned in the tolerance level option).

The test also created a diff folder where it stores the diff image. When we open the image, the mismatched sections are highlighted in pink.

If we want to ignore certain parts of the page and focus on specific regions, we should use the second ResembleJS function “I.seeVisualDiffForElement”.

For example, our page features dynamic content that we don’t wish to test, and we only want to focus on a specific part of the page. Instead of using I.seeVisualDiff, we should use I.seeVisualDiffForElement and insert the selector as the first argument, but leave the other arguments the same, as in the sketch below.
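
A sketch of the element-level check; the selector and image name are illustrative:

I.seeVisualDiffForElement('#header', 'homepage.png', { prepareBaseImage: false, tolerance: 0 });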

Let’s change the label for one of the menu items to fail the test, set up a tolerance of 0, and verify the mismatch percentage.

We can see that the mismatch percentage is equal to 0.03. This means that if we keep the tolerance level at 1, the test will pass despite the difference between the images.

Here, the menu element highlighted in pink is the only difference from the baseline image. You can see that the helper completely ignores the rest of the page and focuses only on the header element.

It’s possible to run both comparison strategies inside the same test with individual tolerance levels.

Conclusion:

  • CodeceptJS has a wide range of tools that can be integrated to perform visual testing
  • Some of the tools’ UIs are perfect for manipulating images, including choosing which testing approach we wish to follow.
  • We can run Visual Regression Tracker internally to avoid legal and security issues.
