Mendix tops chart in Applitools hackathon

Arjan Blok
Mar 10 · 5 min read

A few months ago, Applitools announced their online ‘Applitools Visual AI Rockstar Hackathon’. For those who do not know Applitools: it is a company that makes a toolset for image comparison to aid test automation. One of its unique selling points is that it uses AI to do smart comparisons, which helps avoid false positives. It can be used in combination with most common test automation frameworks, like plain Selenium, WebdriverIO, and Cypress.

Most traditional test automation frameworks are geared towards testing your website’s functional correctness, not towards validating its visual correctness. Testing visual correctness with these frameworks would require you to validate all CSS and DOM elements, and even then, browser rendering might still cause issues. The idea behind Applitools is that a screenshot works better than any amount of DOM and CSS validations. Not just for visual validations, but also for most functional validations: in the end, most actions on a website lead to a change in the UI.

Applitools has been making quite a bit of noise in the testing space lately. The company is also responsible for the excellent free online learning platform Test Automation University.


When we learned about the hackathon, Corina Zaharia and I (Arjan Blok) both decided to compete and learn a tool we had no previous experience with. The assignment for the hackathon consisted of 4 parts:

  1. Create automated tests that cover 5 scenarios using a “traditional” approach, with whatever automation library or framework you wanted.
  2. Create tests that cover the same 5 scenarios, but now using the Applitools toolset.
  3. Run the tests you created in part 1 against an altered version of the website (v2). The tests from part 1 should catch all differences that were introduced in version 2 of the website.
  4. Run the test cases that use Applitools against version 2 and make sure all differences are caught.

Of course, the scenarios were somewhat tailored to making the Applitools toolset shine, but they were also common enough to relate to. The gist of the assignment was that, when using Applitools, you need a lot less code to achieve better coverage.

I know that all sounds pretty vague, so I will show you two examples of how the traditional approach compares to the Applitools approach.

Login Form

We had to validate that the login page looked correct. All texts, buttons, inputs, and images had to be validated so that the tests would catch the differences introduced in v2.

Login form

Of course, I started by creating a page object model for the application pages to abstract away implementation details. But in the end, to catch most differences between version 1 and version 2 of the website, I needed quite a few tests:

Login page tests traditional
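The original post shows these tests as an image. To give an impression of what element-by-element validation looks like, here is a hypothetical sketch: the field names and expected values are illustrative, not the hackathon’s actual selectors, and a plain object stands in for real WebDriver element queries so the assertion logic is visible on its own.

```javascript
// Traditional approach (sketch): every text, label, and element on the login
// page needs its own explicit check. The `page` object stands in for values
// that would normally come from a page object model backed by WebDriver.
function validateLoginPage(page) {
  const failures = [];
  if (page.logoDisplayed !== true) failures.push('logo missing');
  if (page.header !== 'Login Form') failures.push('header text changed');
  if (page.usernameLabel !== 'Username') failures.push('username label changed');
  if (page.usernamePlaceholder !== 'Enter your username') failures.push('username placeholder changed');
  if (page.passwordLabel !== 'Password') failures.push('password label changed');
  if (page.passwordPlaceholder !== 'Enter your password') failures.push('password placeholder changed');
  if (page.loginButtonText !== 'Log In') failures.push('login button text changed');
  if (page.rememberMeDisplayed !== true) failures.push('remember-me checkbox missing');
  if (page.socialIconCount !== 3) failures.push('social icon count changed');
  return failures;
}

// v1 of the page passes every check:
const v1 = {
  logoDisplayed: true, header: 'Login Form',
  usernameLabel: 'Username', usernamePlaceholder: 'Enter your username',
  passwordLabel: 'Password', passwordPlaceholder: 'Enter your password',
  loginButtonText: 'Log In', rememberMeDisplayed: true, socialIconCount: 3,
};
console.log(validateLoginPage(v1));

// v2, with a changed label and a missing icon, is caught,
// but only because both properties happen to have explicit checks:
const v2 = { ...v1, usernameLabel: 'Username*', socialIconCount: 2 };
console.log(validateLoginPage(v2));
```

Note that anything without an explicit check (a moved element, a changed color) slips through unnoticed.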

With Applitools, a single screenshot check was all I needed to catch all visual differences between v1 and v2:

Login page tests Applitools
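Again, the original code is an image; the sketch below shows what such a check roughly looks like with the Applitools JavaScript SDK (for example `@applitools/eyes-selenium`). The app name, test name, and tag are illustrative, and actually running it requires a WebDriver instance plus an Applitools API key.

```javascript
// Applitools approach (sketch): one full-window snapshot replaces every
// per-element assertion above. Assumes the Eyes SDK, e.g.
//   const { Eyes, Target } = require('@applitools/eyes-selenium');
async function checkLoginPage(eyes, driver, Target) {
  await eyes.open(driver, 'Applitools Hackathon', 'Login page');
  // The AI comparison flags any visual difference from the stored baseline:
  await eyes.check('Login window', Target.window().fully());
  await eyes.close();
}
```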

Bar Chart

We had to validate that a bar chart was shown correctly and that any differences between v1 and v2 of the website were caught.

Bar chart canvas element

It turned out that the bar chart was a single DOM element: a canvas element. This basically made it impossible to interact with or read data from. What was still possible was to access the source data used to render the bar chart by executing some JavaScript on the website. My tests looked like this:

Bar chart tests traditional
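The screenshot of these tests did not survive, so here is a hypothetical sketch of the approach: pull the chart’s source data out of the page with `executeScript` and assert on it. The global name `barChartData` and the data shape are illustrative; the assertion logic is separated into a plain function so it can run without a browser.

```javascript
// Traditional approach (sketch): the canvas can't be queried directly, so the
// chart's source data is fetched from the page via executeScript.
// `window.barChartData` is a hypothetical global, not the real page's name.
async function getChartData(driver) {
  return driver.executeScript('return window.barChartData;');
}

// Assertion logic: compare expected bars against what the page rendered.
function validateChart(expected, actual) {
  const failures = [];
  if (actual.length !== expected.length) {
    failures.push('number of bars changed');
  }
  expected.forEach((bar, i) => {
    if (!actual[i] || actual[i].value !== bar.value) {
      failures.push(`bar "${bar.label}" height changed`);
    }
  });
  return failures;
}

const v1Data = [{ label: 'Jan', value: 4 }, { label: 'Feb', value: 8 }];
const v2Data = [{ label: 'Jan', value: 4 }, { label: 'Feb', value: 6 }];
console.log(validateChart(v1Data, v1Data)); // v1 against itself: no failures
console.log(validateChart(v1Data, v2Data)); // the changed Feb bar is caught
```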

I also had to write code that executed JavaScript on the website to return the source data. Again, with Applitools, all that was needed was a screenshot to validate the chart:

Bar chart tests Applitools
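As with the login page, the original code is an image. A sketch of the Applitools side, again assuming the Eyes SDK plus Selenium’s `By` locator: a region snapshot of the canvas covers bar heights, colors, and labels in one check. The `By.id('canvas')` selector is a guess for illustration.

```javascript
// Applitools approach (sketch): snapshot just the chart's canvas region.
// Assumes const { Eyes, Target } = require('@applitools/eyes-selenium');
// and const { By } = require('selenium-webdriver');
async function checkBarChart(eyes, driver, Target, By) {
  await eyes.open(driver, 'Applitools Hackathon', 'Bar chart');
  // Any pixel-level change in the chart (height, color, label) is flagged:
  await eyes.check('Chart', Target.region(By.id('canvas')));
  await eyes.close();
}
```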

And in the traditional solution, I am only covering the bar heights, not even looking at things like chart colors. That would require even more code.

If you are interested in all the other scenarios, you can find our repositories here:


After submitting our solutions, we patiently waited a few weeks for the results to be announced. As it turned out, we both made it into the top 10 (out of 3,000 signups). Corina even took first place!

Mendix and Applitools

Would I recommend using Applitools for testing a Mendix app? If you are using a lot of custom styling and/or custom components, Applitools can help you find potential issues that traditional automation will miss. Test development is also faster, since assertions require a lot less code. On the other hand, since the assertions are much less specific and Mendix apps often go through rapid change, managing the baseline screenshots will require a lot of maintenance. I guess it depends on how much you value visual perfection.


This hackathon was a fun and challenging way of getting to know Applitools. It made great use of common day-to-day scenarios to show where Applitools clearly outperforms traditional approaches in speed, simplicity, and coverage. Applitools could be useful for testing Mendix projects that contain a lot of custom components and/or have a lot of custom styling.

Mendix Community

The community-sourced publication for low-code

Arjan Blok

Written by

QA lead with 15+ years of experience in testing.
