Pixels matter, or easy UI screenshot testing in React Native

Maksym Rusynyk
Published in ING Blog
10 min read · Sep 10, 2019

Developing any application requires lots of knowledge, effort, time and, of course, testing. But once your product reaches the customer, it doesn’t matter how well your code is organized, how many unit tests you have, or which revolutionary frameworks or libraries you are integrating. The first thing the customer sees is the UI of your application, and it greatly influences their first impression and general opinion of it. And who knows, chances are they will dislike it and leave your application, never to come back, or you will get fewer stars in the app store than expected. That is why your application has to be pixel perfect and should always look spot on. You only have one shot to impress your customer.

In this article you will find an introduction to the pixels-catcher project, which can be applied to any React Native application and enables UI snapshot testing of any screen or component. Moreover, when integrated with CI, it makes it possible to automatically check the quality of the screens and validate that the application is pixel perfect and matches the predefined UI expectations.

Screenshots

There is no doubt that you already know what “pixel perfect” means, but in case you need to refresh your knowledge, you can find a great example in The Pixel Perfect Precision Handbook. You’ll notice that most of its content is simply comparing images, like so:

This is a good example of what screenshot testing is and why it’s useful. On the right side of the picture is a reference image (the design), while on the left side you can see the actual implementation rendered by your application. In such a case it’s really easy to spot the difference and quickly understand that the feature is not ready yet, or that an introduced change breaks the expected layout. It can also happen that a UI solution works properly on one platform but behaves differently on another.

As an alternative to screenshots, you may already use jest for unit tests, a powerful framework for testing your application code. One of its capabilities is to capture a snapshot of the current state of a component and create a diff with a reference snapshot. But looking at the initial snapshot:

you have no idea how this is actually rendered in your application: on a small screen or a large one, on Android or on iOS, etc. As a result you need to dive into the snapshots and sources, run the application to check what actually changed, how components are used and which properties are applied, and, more importantly, manually confirm how it renders on different devices.
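For reference, a typical jest snapshot test looks something like this (a minimal sketch using the react-test-renderer package; the App import path is an assumption):

```
import React from 'react';
import renderer from 'react-test-renderer';
import App from './App'; // import path is an assumption

it('renders correctly', () => {
  // Serializes the rendered component tree to JSON and compares it
  // with the .snap file stored from a previous run.
  const tree = renderer.create(<App />).toJSON();
  expect(tree).toMatchSnapshot();
});
```

The stored snapshot is a serialized component tree, not an image, which is exactly why it cannot tell you how the UI really looks on a device.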

Benefits of UI testing

At first glance you might think that developing the application and integrating the pixels-catcher project will take more time without bringing any benefit. But let’s look at what you can gain during the development process:

Initial implementation

Development of a new feature or complete application usually consists of the following stages:

  • Design and development
  • Testing (unit testing, manual, etc.)
  • Code review
  • Approvals
  • Etc.

Introducing visual UI tests means that every time you deliver a feature, like a new screen or a component, you also need to write tests that will render the screen/component in different states, sizes or even on different platforms. The output of such tests will be a reference image, which means that:

  • UI designers can quickly check the images, provide feedback about the implementation and verify whether it meets the expectations.
  • Another developer/reviewer can use the image as a first step to verify whether the pull/merge request is ready for review and visually understand the result of the change. This means that there is no need to compile the project and run the application if screens are not ready.
  • Even for the developer there are benefits, because after running the visual UI tests on different devices it is easy to get results for all of them and therefore reduce the amount of manual work without reducing the quality of the application.

Refactoring

Imagine an existing application or component that needs to be refactored. This might be a request to change a model or the data layer, improve performance, etc. At this stage the project probably already includes some e2e, unit or other tests. Depending on the change, chances are that together with the implementation code, all related tests will require some refactoring too. The only thing that remains unchanged is the UI representation. With screenshot tests in place and the UI states defined via reference images, even if the snapshot tests themselves need to be changed, the snapshot images they produce should remain the same. As a result, this provides a lot of confidence that nothing is broken in the UI and the customer will not notice the change.

In another case there might be a change that does affect the UI. Imagine there are some globally defined font sizes in the application, for example h1, h2, etc., and there is a requirement to change the size of the h2 font. This change is expected to be reflected in all app components/screens, as the global font should be reused everywhere for consistency. In this case, having screenshot tests for all screens and components can highlight the introduced differences. If some tests do not fail after the first run, they apparently do not use this font size. This can happen if custom fonts are used instead of the predefined ones, or if the UI is inconsistent between screens/components. As the end result of your change, all the updated reference images will be present in the merge/pull request and available for a better code review.
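As a sketch, such global font definitions might live in a shared module (the names and sizes below are illustrative, not from the article):

```javascript
// typography.js - a single source of truth for text styles.
// Changing h2.fontSize here should ripple through every screen that
// uses these presets; screenshot tests reveal which screens really do.
const typography = {
  h1: { fontSize: 32, fontWeight: 'bold' },
  h2: { fontSize: 24, fontWeight: 'bold' },
  body: { fontSize: 16, fontWeight: 'normal' },
};

module.exports = typography;
```

Any screen whose reference image does not change after bumping h2.fontSize is a candidate for a hard-coded font size.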

Development

Even during development of your application, screen or component there are a lot of benefits. One example is a component that needs to be tested with different content: long or short text, different margin/padding, etc. In this case the benefit of screenshot tests is that once you’ve written them, you can run them again multiple times with different content, on different devices, screen sizes and platforms.
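For example, a hypothetical Card component could be registered in several states, each producing its own reference image (the component name and props are illustrative; registerSnapshot and Snapshot come from pixels-catcher and are covered below):

```
import { registerSnapshot, Snapshot } from 'pixels-catcher';
import Card from './Card'; // hypothetical component

registerSnapshot(class CardShortText extends Snapshot<*, *> {
  static snapshotName = 'Card_shortText';

  renderContent() {
    return <Card title="Hi" />;
  }
});

registerSnapshot(class CardLongText extends Snapshot<*, *> {
  static snapshotName = 'Card_longText';

  renderContent() {
    return <Card title="A very long title that wraps over several lines" />;
  }
});
```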

Another good example is the implementation of localization. Assuming the application has to support several languages, let’s say five, the developer has to change the language and run the app at least 5 times. Taking into account that both the Android and iOS platforms are supported and at least one small and one large device need to be tested, this results in (5 languages) * (2 platforms) * (2 screen sizes) = 20 runs. It gets even worse if you also need to test on medium-size devices: (5 languages) * (2 platforms) * (3 screen sizes) = 30 runs. That is far too time consuming and an enormous amount of work for the developer. But with the help of CI and/or some local scripts, all these jobs can be triggered automatically and in parallel, so the test results are obtained much faster and with almost no effort from the developer.
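The arithmetic above is just the size of the test matrix; a tiny helper makes this concrete (the language, platform and size lists are illustrative):

```javascript
// Every combination of language, platform and screen size
// becomes one snapshot run.
function testMatrixSize(languages, platforms, screenSizes) {
  return languages.length * platforms.length * screenSizes.length;
}

const languages = ['en', 'nl', 'de', 'fr', 'es'];
const platforms = ['android', 'ios'];

console.log(testMatrixSize(languages, platforms, ['small', 'large']));           // 20
console.log(testMatrixSize(languages, platforms, ['small', 'medium', 'large'])); // 30
```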

And there are many more use cases where snapshot tests can be helpful, but for now, let’s check how to use and integrate the pixels-catcher project.

Getting started

It is not a secret that each project has a budget and deadlines. Therefore, it can be useful to make use of simple but powerful tooling. Another important point is to have a transparent tool without any malicious software, that is well maintained and open for community contribution. Fortunately, pixels-catcher matches all those criteria.

Requirements

The pixels-catcher project does not have any specific requirements and can be easily integrated into any existing project. Moreover, the solution used in pixels-catcher “hides” all native Android and iOS implementation details, thanks to another open source project, react-native-save-view. As a result the only required knowledge is JavaScript; there is no need for Java (Android) or Swift (iOS) experience. This means that any React Native developer is able to use it.

Integration

Pixels-catcher can be integrated into the project in the following few steps:

Install it as a development dependency with npm:

$ npm install pixels-catcher --save-dev

or yarn:

$ yarn add --dev pixels-catcher

Link the react-native-save-view dependency (it provides the native implementation for capturing any React Native element as a base64 image):

$ react-native link react-native-save-view

Starting from RN 0.60 there is no need to link, as Native Modules are now autolinked.

Configure it in package.json by defining a new PixelsCatcher property with the following fields:

"PixelsCatcher": {
  "activityName" : "ACTIVITY_NAME",
  "apkFile" : "PATH_TO_APK_FILE",
  "emulatorName" : "EMULATOR_NAME",
  "packageName" : "ANDROID_PACKAGE_NAME",
  "snapshotsPath": "PATH_TO_SNAPSHOTS_FILES"
}

Or alternatively, you can create a pixels-catcher.json file and configure your options there.
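For instance, such a file might look as follows (the values are placeholders mirroring the fields above; the exact top-level shape of pixels-catcher.json is an assumption, so check the project README):

```json
{
  "PixelsCatcher": {
    "activityName": "MainActivity",
    "apkFile": "./android/app/build/outputs/apk/debug/app-debug.apk",
    "emulatorName": "Nexus_5X_API_28",
    "packageName": "com.example.app",
    "snapshotsPath": "./snapshots"
  }
}
```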

Add tests

There are a few imports available:

import {
  registerSnapshot,
  runSnapshots,
  Snapshot,
} from 'pixels-catcher';

Here Snapshot is an "abstract" class. It requires an implementation of the renderContent method (an alternative to render), which should render the component or page under test, and a static property snapshotName, which defines the name of the screenshot and corresponds to the name of the reference image.

So, in a basic React Native project, the implementation of the snapshot should look something like this:

class AppSnapshot extends Snapshot<*, *> {
  static snapshotName = 'AppSnapshot';

  renderContent() {
    return <App />;
  }
}

and the snapshot can be registered with:

registerSnapshot(AppSnapshot);

In the same way any React Native component can be tested:

registerSnapshot(class PageSnapshot extends Snapshot<*, *> {
  static snapshotName = 'Page';

  renderContent() {
    return <Page />;
  }
});

registerSnapshot(class FooterSnapshot extends Snapshot<*, *> {
  static snapshotName = 'Footer';

  renderContent() {
    return <Footer />;
  }
});

The last step, after all required snapshots are registered, is to run all of them:

runSnapshots(PUT_YOUR_APP_NAME_HERE);

That’s all that’s required for integration and writing screenshot tests.

Run tests

To run the tests, the content of the index.js file can be modified so the snapshots are registered instead of the App component:

import { AppRegistry } from 'react-native';
import App from './App';

const useSnapshotTest = true;

if (!useSnapshotTest) {
  AppRegistry.registerComponent('app', () => App);
} else {
  require('./indexSnapshot');
}

where all snapshots are implemented in the indexSnapshot.js file.

Another way to register and run snapshots is to specify the indexSnapshot.js file as the entry file. If you need more information about this, check the demo project.

After that start the server:

./node_modules/.bin/pixels-catcher dev

And run the application as usual, using react-native run-ios or react-native run-android commands.

As soon as the application starts, all snapshots are rendered one by one and the results are reported to the local server as base64 data of the captured images. The final report is printed to the console and all results are stored in the folder specified by PixelsCatcher.snapshotsPath, where:

  • uploads - are the actual results
  • refImages - are your reference images
  • diffs - are the differences, highlighted with red pixels

After the first run all tests will fail, because there are no reference images yet. To fix this, check the uploads folder, which contains all the results. In the case of the App component, the result will be:

If the current result matches the expectations, move the files from the uploads folder to the refImages folder and restart your tests. Now all the tests will pass.

If some tests are failing, it can be useful to check the diffs folder, which contains images that highlight the mismatch between the reference image and the actual result. For example, the result can be:

where the mismatch is highlighted in red (in this case it is “One”, “then” and “Help”). It can be a new colour, a changed size, a slightly shifted position, etc. This is exactly where the change is and where it has to be thoroughly checked whether this is the expected result. The difference is calculated using pixelmatch.
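Conceptually, the diff is computed per pixel. Here is a simplified sketch of the idea (not pixelmatch’s actual algorithm, which also accounts for anti-aliasing and perceptual color distance):

```javascript
// Compare two same-sized images given as flat RGBA byte arrays.
// Mismatching pixels are painted red in `diff`; returns the mismatch count.
function naiveDiff(img1, img2, diff, width, height) {
  let mismatched = 0;
  for (let i = 0; i < width * height; i++) {
    const p = i * 4; // 4 bytes per pixel: R, G, B, A
    const same =
      img1[p] === img2[p] &&
      img1[p + 1] === img2[p + 1] &&
      img1[p + 2] === img2[p + 2] &&
      img1[p + 3] === img2[p + 3];
    if (same) {
      // Keep the original pixel so the surrounding context stays visible.
      diff.set(img1.subarray(p, p + 4), p);
    } else {
      diff.set([255, 0, 0, 255], p); // highlight the mismatch with red
      mismatched++;
    }
  }
  return mismatched;
}

// Two 2x1 "images": the second pixel differs.
const a = Uint8Array.from([0, 0, 0, 255, 10, 10, 10, 255]);
const b = Uint8Array.from([0, 0, 0, 255, 99, 10, 10, 255]);
const d = new Uint8Array(a.length);
console.log(naiveDiff(a, b, d, 2, 1)); // 1
```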

Conclusion

The solution described above shows that it’s quite easy to integrate and use the pixels-catcher project with any existing React Native application and, as a result, get more control over the development process and be more confident about changes before the application is released. With CI integration it is possible to get additional checks and fast feedback. But most important is that the application will stay pixel perfect and keep attracting users.

Thanks for reading! And if you would like to try it out, you can check the demo project that includes a working example.
