Visual Regression Testing (move fast and don’t break things)

Ahmed Mahmoud
4 min read · Jul 1, 2017


How many times have you wanted to change code (especially CSS) but were scared to death that you’d break everything?

So you tried to be more specific (in terms of element selectors), which will backfire on you one day, or even the same day, as you run into conflicts with other breakpoint variations and with different variations of the same component.

There are plenty of approaches to achieving “flat specificity” in your code, such as BEM or MaintainableCSS, which make your code far more organized, readable, and maintainable and get you out of specificity hell. But they don’t guarantee that, after the changes you’ve made, everything is fine and nothing has broken.

Let’s assume you want to verify that everything is fine and nothing has broken. What would you do? Well, if I were you, I’d do the following:

  1. List all pages/routes in my app.
  2. List the main breakpoints/viewports.
  3. List different scenarios that can be done on each page.
  4. Manually open each page at each of the listed breakpoints, go through each of the available scenarios, and make sure nothing has broken, which is super time-consuming and error-prone.

Solution:

What if I told you that you can automate all of the previous steps with a single command, npm test? When it finishes, it either tells you that everything is fine, so you can go ahead with your deployment with confidence, or that things look different on certain pages, and it gives you the visual difference so you can fix it before it goes to QA or, as in most cases, to production.

Let’s see it in action:

I’ve made two dummy samples on JS Bin just for demo purposes, https://jsbin.com/yutoho and https://jsbin.com/toruge, and let’s assume I’ll be testing at two breakpoints: 1440px and 320px.

I’ll create an “index.js” file inside a “test” folder:
mkdir test && cd test && subl index.js

Tools used:

  1. PhantomJS (a headless browser used to capture screenshots of the site at different breakpoints and to evaluate scripts that simulate different scenarios).
  2. image-diff (to diff the expected/reference screenshots against the current ones).
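
Both come from npm (I’m using the phantomjs-prebuilt wrapper package here; note that image-diff also expects ImageMagick to be installed on your machine):

npm install image-diff phantomjs-prebuilt --save-dev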

Full Code:

test/index.js
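
Roughly, test/index.js looks like this (a simplified sketch; the helper names and the exact flow here are illustrative):

```js
var fs = require('fs');
var path = require('path');
var execFile = require('child_process').execFile;
var phantomjs = require('phantomjs-prebuilt');
var imageDiff = require('image-diff');

// 1. All pages/routes you want to test.
var urls = [
  'https://jsbin.com/yutoho',
  'https://jsbin.com/toruge'
];

// 2. All breakpoints/viewport widths you want to test at.
var breakpoints = [1440, 320];

var expectedDir = path.join(__dirname, 'expected');
var newDir = path.join(__dirname, 'new');
var diffDir = path.join(__dirname, 'diff');

// On the very first run there are no reference images yet, so capture
// into "expected" instead of comparing. (Asking for confirmation first
// is shown in the prompt sketch further down.)
var firstRun = !fs.existsSync(expectedDir);
var targetDir = firstRun ? expectedDir : newDir;

[expectedDir, newDir, diffDir].forEach(function (dir) {
  if (!fs.existsSync(dir)) { fs.mkdirSync(dir); }
});

// Build every (url, breakpoint) combination we need a screenshot for.
var jobs = [];
urls.forEach(function (url) {
  breakpoints.forEach(function (width) {
    jobs.push({
      url: url,
      width: width,
      name: url.split('/').pop() + '-' + width + '.png'
    });
  });
});

var failures = [];

function next() {
  var job = jobs.shift();
  if (!job) { return report(); }

  var output = path.join(targetDir, job.name);
  // Render the page with PhantomJS at the given viewport width.
  execFile(
    phantomjs.path,
    [path.join(__dirname, 'create-screenshot.js'), job.url, String(job.width), output],
    function (err) {
      if (err) { throw err; }
      if (firstRun) { return next(); }

      // Compare the fresh screenshot against the reference one and
      // write the visual difference into the "diff" folder.
      imageDiff({
        expectedImage: path.join(expectedDir, job.name),
        actualImage: output,
        diffImage: path.join(diffDir, job.name)
      }, function (diffErr, imagesAreSame) {
        if (diffErr) { throw diffErr; }
        if (!imagesAreSame) { failures.push(job.name); }
        console.log((imagesAreSame ? 'OK    ' : 'DIFF  ') + job.name);
        next();
      });
    }
  );
}

function report() {
  if (firstRun) {
    console.log('Reference screenshots created in "expected".');
  } else if (failures.length === 0) {
    console.log('Everything is fine, nothing has broken.');
  } else {
    console.log(failures.length + ' screenshot(s) differ, check the "diff" folder.');
    process.exit(1); // make "npm test" fail so a broken build is obvious
  }
}

next();
```
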
test/create-screenshot.js
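
And the PhantomJS side, which loads a page at a given width and saves the PNG (again a simplified sketch; the 500ms settle delay is a shortcut, and this is also where you’d evaluate scripts to simulate different scenarios):

```js
// Runs inside PhantomJS, not Node.
// Usage: phantomjs create-screenshot.js <url> <width> <output.png>
var system = require('system');
var page = require('webpage').create();

var url = system.args[1];
var width = parseInt(system.args[2], 10);
var output = system.args[3];

// The height here is only an initial value; PhantomJS renders
// the full height of the page into the screenshot.
page.viewportSize = { width: width, height: 800 };

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
    return;
  }

  // Give the page a moment to settle (fonts, images, etc.)
  // before capturing the screenshot.
  setTimeout(function () {
    page.render(output);
    phantom.exit(0);
  }, 500);
});
```
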
package.json
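
Finally, package.json wires it all up so that npm test runs the script above (a sketch; the version numbers are just indicative):

```json
{
  "name": "visual-regression-demo",
  "version": "1.0.0",
  "scripts": {
    "test": "node test/index.js"
  },
  "devDependencies": {
    "image-diff": "^2.0.0",
    "phantomjs-prebuilt": "^2.1.14"
  }
}
```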

First, add all the URLs you want to test to the urls array.

Then add all breakpoints you want to test at to the breakpoints array.

When you execute npm test for the first time, there’s no “expected” folder with the reference images inside your “test” folder yet, so it’ll ask you whether you want to create them first.

Then it’ll create all of them inside the “expected” folder.
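
That first-run prompt can be done with Node’s built-in readline, along these lines (a sketch; the wording of the question is just an example):

```js
var readline = require('readline');

// Ask for confirmation before generating the reference screenshots.
function confirmFirstRun(callback) {
  var rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout
  });
  rl.question('No "expected" folder found. Create the reference screenshots now? (y/n) ', function (answer) {
    rl.close();
    callback(answer.trim().toLowerCase() === 'y');
  });
}

// Only start capturing if the user says yes.
confirmFirstRun(function (yes) {
  if (!yes) { process.exit(0); }
  next(); // kick off the capture loop from index.js
});
```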

Now assume you’ve made some changes and want to test them, so you execute npm test.

It creates another folder, “new”, compares its screenshots against the images in the “expected” folder, and exports the differences to a “diff” folder. In this case everything is fine, so you’ll see this.

We have this code in https://jsbin.com/yutoho

Let’s change them to this

Let’s run the test now and see what we get.

Expected image
New image
Diff image

It caught the issue, which, in case you hadn’t noticed, is hard to spot with the naked eye.

Enhancements:

I could’ve made some ES6 enhancements, like using let and const instead of var, arrow functions instead of traditional functions, and template literals, which are more readable than concatenating strings and variables, but I wanted to keep it as simple as possible so it gets straight to the point.
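
For example, the line that builds a screenshot’s file name could go from concatenation with var to a template literal with const (using the illustrative names from the sketch above):

```js
// ES5: var and string concatenation
var name = url.split('/').pop() + '-' + width + '.png';

// ES6: const and a template literal
const name = `${url.split('/').pop()}-${width}.png`;
```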

Alternatives:

  • Percy (starts at $149/month)

One last thing…

If you liked this article, click the 💚 below so other people will see it here on Medium.
