Automating WebGL Testing, easy

Santi Roca
Sep 23, 2018 · 6 min read
Cover image: Yosemite park, Yosemite Firefall event.

I’ve been working on a 3D/2D Viewer for about four years now. The testing approach we were following was mostly manual, except for some particularly complex functions. It was just a matter of time before manual testing got in the way of development speed. Don’t get me wrong, manual testing is a crucial part of the development process.

“Tools are there to play their role in a bigger play.”

When a new project involving both Viewers was thrown on the table, the idea of a new layer of functionality on top of the existing ones kept echoing in my head.

TL;DR

Write automation tests in Gherkin syntax. Use Jest to implement the steps. Store sample images from a known-good execution. Run new instances of the test and compare the output against the stored images. Be careful with image proportions, keep the same browser size, and compare grayscale images.

Setting up the environment

The first obvious decision was: how were we going to run the test cases? The, again, obvious choice was Jest. Simple as it is, Jest provides the off-the-shelf configuration that makes it so hard not to go with it.

npm init
npm install --save-dev \
jest \
gherkin-easy \
node-resemble-js \
sharp \
selenium-webdriver \
jest-puppeteer \
gherkin-jest \
puppeteer

So far, you have created a new package.json and installed all the dependencies that I used to run the tests. Yup, that’s all…

Now, adding a bit of Jest configuration at the bottom of your package.json will do the trick, so that every time you run the jest command, the preset and the transformation engine are automatically picked up from this configuration file. Just add these two entries to your existing package.json file.
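The original gist isn’t embedded here, but a minimal sketch of those two entries might look like this, assuming jest-puppeteer as the preset and gherkin-jest as the transform for .feature files:

"jest": {
  "preset": "jest-puppeteer",
  "transform": {
    "^.+\\.feature$": "gherkin-jest"
  }
}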

Now, let’s move on to the folder hierarchy we chose. Given that we were going to need two sets of tests, those that store the image on a correct execution of the test, and those that compare new output with the stored one to ensure the consistency of the feature over time, we decided to build the following directory tree.

root
|-- specs
|   |-- build
|   |-- test
|   |-- cases
|-- lib
|-- samples

The first folder (specs) will hold all the information related to each test case. Its first two subfolders will hold almost identical code, except for the last step of each test case: in the build phase, that step stores the image as a sample; in the test phase, it compares the output with the image stored on the first correct execution of the test case. The last subfolder will hold the feature definitions, in Gherkin syntax, with each of the planned scenarios.

The lib folder will hold a couple of helper classes that will allow us to start our browser instance, run the steps, store the images, compare them, and so on.

Finally, the samples folder will store the output of the first execution of a test, which will be used in the future as the reference to ensure the consistency of the feature over time.

This is the part in which we write the important stuff

To keep things simple, I’ll add the code of those helper classes, with a brief description of what each one does, why we ended up doing it that way, and how you should use it.

ImageHelper will allow you to compare two images without having to worry about color schema and image dimensions. This could, for sure, be enormously improved, but as far as our project was concerned, it worked correctly in all of our test cases. It allows us to send, as a first parameter, either a string with the image name or a buffer with its content. The second parameter only accepts a buffer, which will be very handy for the upcoming step definition.

Given that most of the interactions you would perform on a WebGL application involve transformations, and those are not highly precise across consecutive executions, we allow the test to pass with a mismatch threshold of 12.5%. We also help the accuracy of the test by resizing the target image, but you should make sure that your environment remains the same for the same test. Also, comparing in grayscale allows you to focus on transformations rather than color schemas. You can tweak this if you need to automate color-based interactions.

Create a new file called ImageHelper.js and place it inside the ./lib folder.
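The original gist isn’t reproduced here, so below is a minimal sketch of what ImageHelper.js could look like, built on node-resemble-js and sharp. The samples directory, the method names, and the store helper are my assumptions, not the original code:

// ImageHelper.js — a sketch, not the original implementation.
const fs = require("fs");
const path = require("path");
const sharp = require("sharp");
const resemble = require("node-resemble-js");

const SAMPLES_DIR = path.join(__dirname, "..", "samples"); // assumed location
const MISMATCH_THRESHOLD = 12.5; // percent, as described above

class ImageHelper {
  // `sample` is either a file name inside ./samples or a Buffer;
  // `actual` is always a Buffer (the freshly captured screenshot).
  static async compare(sample, actual) {
    const sampleBuffer = Buffer.isBuffer(sample)
      ? sample
      : fs.readFileSync(path.join(SAMPLES_DIR, sample));

    // Normalize both images: same dimensions, grayscale, PNG.
    const { width, height } = await sharp(sampleBuffer).metadata();
    const normalize = (buffer) =>
      sharp(buffer).resize(width, height).grayscale().png().toBuffer();

    const [reference, target] = await Promise.all([
      normalize(sampleBuffer),
      normalize(actual),
    ]);

    return new Promise((resolve) => {
      resemble(reference)
        .compareTo(target)
        .onComplete((diff) => {
          resolve(Number(diff.misMatchPercentage) <= MISMATCH_THRESHOLD);
        });
    });
  }

  // Used by the build phase to persist the reference sample.
  static store(name, buffer) {
    fs.writeFileSync(path.join(SAMPLES_DIR, name), buffer);
  }
}

module.exports = ImageHelper;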

The AutomationFramework file is very straightforward. It just uses the Selenium WebDriver to manipulate DOM elements. I’ve selected the Chrome browser since it’s the only browser we’re targeting right now (weird, I know. There’s a really strong reason why). The methods just wrap Selenium functionality in a Promise, which will allow us to run asynchronous steps.

Create the AutomationFramework.js file inside the ./lib folder.
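Again, the original gist isn’t embedded, so here is a hedged sketch of the kind of Promise-based wrapper described above, using selenium-webdriver with Chrome. The method names and the fixed window size are assumptions:

// AutomationFramework.js — a sketch of the Selenium wrapper.
const { Builder } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");

class AutomationFramework {
  async start() {
    // Chrome only, as in the post; a fixed window size keeps
    // screenshots comparable between runs (assumed dimensions).
    this.driver = await new Builder()
      .forBrowser("chrome")
      .setChromeOptions(
        new chrome.Options().windowSize({ width: 1280, height: 720 })
      )
      .build();
  }

  open(url) {
    return this.driver.get(url);
  }

  wait(ms) {
    return this.driver.sleep(ms);
  }

  // Returns a Buffer with a PNG of the current viewport.
  async takeScreenshot() {
    const base64 = await this.driver.takeScreenshot();
    return Buffer.from(base64, "base64");
  }

  close() {
    return this.driver.quit();
  }
}

module.exports = AutomationFramework;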

This is the part where the problems, kindly, arose

If you try to run those, you will first notice that you are still missing the chromedriver. In the root folder of your project, you need to add the chromedriver executable: depending on your OS, download the zip file and uncompress it in the root folder, along with the package.json. I would recommend using version 2.29, which is the one we were able to run successfully.

Chrome webdriver 2.29
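For reference, the 2.29 binaries live on the legacy chromedriver download site; something like this (macOS shown here, swap the suffix for linux64 or win32 as needed) drops the executable next to your package.json:

curl -O https://chromedriver.storage.googleapis.com/2.29/chromedriver_mac64.zip
unzip chromedriver_mac64.zip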

Also, make sure that your WebGL context is configured with preserveDrawingBuffer set to true. This ensures that the context does not swap buffers as soon as it finishes rendering to the canvas, thus keeping the pixels available for you to fetch. Otherwise, you will get a black image matching your canvas’ dimensions.

const context = canvas.getContext("webgl", {
  preserveDrawingBuffer: true,
});

This is the part where I drove the entire QA team crazy

The next step is rather simple. We are going to create three more files. I’m going to assume that you already have a WebGL application that you would like to try. To keep things simple, I’ll cover the server setup, reports, and tear-down in a separate post. For now, we are just going to use one of those beautiful demos in which an exciting cube spins around its own center.

The first file that I’m going to write is the feature. Inside specs/cases we are going to create a new file called “when_i_load_the_page_the_cube_should_spin.feature”.

Feature: Cube Rotation

  Scenario Outline: Cube should rotate automatically
    Given that I am on the home view
    When I wait for <time> ms
    Then the cube should rotate automatically

    Examples:
      | id | time |
      | 1  | 1000 |
      | 2  | 2000 |
      | 3  | 3000 |
      | 4  | 4000 |

Next, we are going to define two more files that will look almost the same, except for the last step. The first one will go into specs/build and will be in charge of creating the sample image that you will use in the future as the reference to ensure consistency.

The second will go into specs/test and will run every time, to compare the latest version of your product with the sample image stored on the first run.
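The original step files aren’t embedded here, so below is a plain-Jest approximation of the pair, using the two helpers from ./lib. The URL, file names, and step wiring are my assumptions; the point is that only the last step differs between the build and test variants:

// specs/build/when_i_load_the_page_the_cube_should_spin.js — sketch only.
const AutomationFramework = require("../../lib/AutomationFramework");
const ImageHelper = require("../../lib/ImageHelper");

const fw = new AutomationFramework();

beforeAll(() => fw.start());
afterAll(() => fw.close());

describe("Cube should rotate automatically", () => {
  [1000, 2000, 3000, 4000].forEach((time) => {
    it(`rotates after ${time} ms`, async () => {
      await fw.open("http://localhost:8080"); // Given I am on the home view (assumed URL)
      await fw.wait(time);                    // When I wait for <time> ms
      const screenshot = await fw.takeScreenshot();

      // Build phase: store the screenshot as the reference sample.
      ImageHelper.store(`cube_rotation_${time}.png`, screenshot);

      // Test phase: the only step that changes — compare instead of store:
      // expect(await ImageHelper.compare(`cube_rotation_${time}.png`, screenshot)).toBe(true);
    });
  });
});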

This is the part where the magic happens

You should now have the whole environment ready for testing. You have configured your package.json with two new commands, test:build and test:run.
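Those two commands aren’t shown in the original config, but they could be as simple as pointing Jest at each folder:

"scripts": {
  "test:build": "jest specs/build",
  "test:run": "jest specs/test"
}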

The build command will create the sample images. Keep in mind that this command will pick up every matching test file, so if you are not looking to rebuild all the samples, you should target a particular file instead. Also, keep in mind that, once created, a sample should never be generated again, since that would overwrite the original evidence of correct behavior.

npm run-script test:build

After this command has successfully run, you can go to the samples folder to find all of your stored images, which will be used in the future to test against new data.

The next command will create the samples for that particular case only.

jest specs/build/when_i_load_the_page_the_cube_should_spin.js

With test:run you are now able to run the tests. You could add this to any build process as the automated solution to test the inner workings of your WebGL application.

npm run-script test:run

This is the part where I pretend to be smart by making a Conclusion

Automated testing is not the answer to every question, but it certainly adds value by reducing manual testing and allowing people to focus on different, less automatable tasks.

Automating graphics applications takes a lot of work and precision but, when executed correctly, it turns out to be pretty much the same as any other automated testing on any platform. Plus, it looks pretty cool!
