Automating UI-Inventory (part 1)


In this article, I will discuss the rationale behind creating an automated UI-Inventory and walk through an example implementation. I will also describe some of the hurdles we encountered along the way and the solutions we used.

Why create a UI-Inventory?

A UI-Inventory is a collection of the UI elements present in the application, regardless of where exactly they are located. Everyday use of the application does not really enable stakeholders to have an overview of all of the UI elements, which might be hidden in remote modules and subpages. Bringing it all together in one image is a refreshing experience, which will stimulate thinking about what can be improved and optimized from a visual point of view.

A classic example of a UI Inventory has been presented in an article called Atomic design workflow published in Smashing Magazine¹.

An example UI-Inventory

As Brad Frost wrote:

“An interface inventory is similar to a content inventory, only instead of sifting through and categorizing content, you’re taking stock of and categorizing all the components that make up your user interface. An interface inventory is a comprehensive collection of the bits and pieces that make up your user interface.”

A UI-Inventory is meant to show everything that is right and wrong about the application UI. Brad Frost continues:

“[…] it’s absolutely essential to get all members of the team to experience the pain of an inconsistent UI for them to start thinking systematically.

For the interface inventory to be as effective as possible, representatives from all disciplines responsible for the success of the site should be in a room together for the exercise. Round up the troops: UX designers, visual designers, front-end developers, back-end developers, copywriters, content strategists, project managers, business owners, QA, and any other stakeholders. The more the merrier! After all, one of the most crucial results of this exercise is to establish a shared vocabulary for everyone in the organization, and that requires input from the entire team.”

At EcoVadis, we embarked on a similar project to create a visual inventory of the UI elements used in our application. One of the early screens of the inventory (heavily inspired by the Smashing Magazine article) looked like this:

A UI-Inventory generated at EcoVadis

As in the original example, the inventory brings together the different kinds of buttons used throughout the application. Some are identical in style, some are wildly different. What was striking at first glance was the sheer variety of button types, far wider than anyone expected. While everyday use of the application did not lead many users to notice discrepancies in button design from one page to the next, juxtaposing all of the button types on one page immediately triggered the thought: “we must do better”. That, if nothing else, is the essence of why the UI-Inventory is such a crucial endeavor.

Automating the creation of the UI Inventory

Creating the UI-Inventory could be a manual process, delegated to the UI team to complete. After all, they are the most competent to know all of the interfaces present in the application they helped design, right? As a one-off effort, this makes a lot of sense. However, if this is meant to be an agile, continuous process that introduces improvements incrementally, it is unlikely that it can keep relying on manual work.

Another reason to automate the work is the ability to bring in additional tools to check the visual integrity of the captured screenshots. With refactoring underway, it is always handy to obtain additional information about any undesired layout changes resulting from work done on the underlying system of shared UI components. Such visual regression testing can be performed with tools such as puppeteer-screenshot-tester².

So, now that we are sold on automating the process, what next? How to do it?

Finding the desired API

As always, I like to start by defining the API I would like to use as a developer. After all, developer buy-in is essential for the project to gain enough traction. So let’s draw up some of the requirements.

  • The configuration file should be a simple JS object
  • It should be simple where it can, yet flexible enough to accommodate more complex use-cases
  • Unequivocal identification of elements to screenshot should be provided
  • In cases where an element is not directly reachable from a simple URL, a click-stream needs to be represented, one that will allow the script to reach the desired element
  • Multiple authentication contexts need to be taken into account

Wow, that’s a lot to take into account! So let’s see how this could be achieved, starting with the simple cases and progressing to the more complex ones. The configuration needs to be readable by node.js scripts, so we are going to use the exports JS module format.

1. Simple URL/ID pair

In this example, let’s try to define routes to scout in the context of a specific authenticated user, `PrimaryUser@ecovadis.com`:

This configuration tells the script to perform the following operations:

  1. Log-in as `PrimaryUser@ecovadis.com`
  2. Navigate to `company/39234/documents`
  3. Find the elements by ID: `#documentName-submit` and `#open-upload-document-modal` and create screenshots of those elements

This seems simple enough, but what if the script needs to execute a click-stream before reaching the desired element? Let’s try to have a configuration for that.

2. Clickstream configuration

This configuration tells the script to perform the following operations:

  1. Log-in as `PrimaryUser@ecovadis.com`
  2. Navigate to `company/39234`
  3. Click the element `#overview-company-details-button-showmore`, then after waiting for the result of the operation, click on `#activation-status-header-edit-button`
  4. After the application completes its actions, find the element by ID: `#activation-modal-button-confirm` and create a screenshot of this element.

Now that we have defined very specific ways to pinpoint a UI element to take a screenshot of, we need to work on the implementation of a node.js script which will be able to read this configuration and deliver the expected results. More about this in the second part of this article.

References

[1] https://www.smashingmagazine.com/atomic-design-workflow/

[2] https://github.com/burnpiro/puppeteer-screenshot-tester

Zenobia Gawlikowska

Frontend engineer at EcoVadis
