When there aren’t users…

Karl Reitzig
The Designer’s Toolbox
6 min read · Sep 16, 2021
An empty park.
Photo by Hans Eiskonen on Unsplash

Regular evaluation has always been at the core of User Experience design; it’s the essential ingredient that empowers teams to create more successful, user-centred products.

These evaluations are especially beneficial when we have real users to inform and guide our designs. But that usually requires significant resources, which means we can’t do it as often as we’d like to.

So, should we save all evaluations till the end? Or stop them completely?

Ultimately, an evaluation will always cost less than a tedious redesign or, worse, a complete failure in the marketplace. But in this article, I want to highlight five essential Predictive Evaluation methods that can help us keep a user-centred perspective even when there aren’t users around to help us.

Predictive Evaluation aims to predict how users will experience different aspects of an interface based on principles and best practices derived from research in areas like Human-Computer Interaction (HCI) and Cognitive Science.

There are inherent risks to Predictive Evaluation because, as the name suggests, it’s a prediction. That means that, besides being prone to bias, this type of evaluation doesn’t offer any empirical data.

But predictive evaluation is often cheap and can give us access to important insights earlier and more frequently.

1. HEURISTIC EVALUATION

Evaluating a checklist.
Photo by Glenn Carstens-Peters on Unsplash
  • Application: Early-stage to Post-launch
  • Goal: Perceived usability.
  • The Pros: Offers actionable insights.
  • The Cons: Focused on individual components and can’t evaluate tasks holistically.

To begin, let's look at Heuristic Evaluation, originally popularised by User-Centred Design (UCD) experts like Don Norman and Jakob Nielsen.

This method, like the others, is rooted in a deep understanding of HCI principles that give us a lens through which we can scrutinize a design.

In a Heuristic Evaluation, a UX designer carefully inspects different areas of the interface against established usability principles, such as affordances, consistency, and feedback, to predict the usability and accessibility of a design. These principles are laid out by Norman, Nielsen, and organizations like the Centre for Excellence in Universal Design.
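To illustrate, here’s a minimal sketch of how an evaluator might log their findings, assuming Nielsen’s ten usability heuristics and his commonly used 0 to 4 severity scale. The heuristic names are real, but the data structure and example findings are purely hypothetical.

```python
from dataclasses import dataclass

# A few of Nielsen's ten usability heuristics (names only, for reference).
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
]

@dataclass
class Finding:
    screen: str        # where the problem was observed
    heuristic: str     # which principle it violates
    description: str   # what the evaluator saw
    severity: int      # 0 = not a problem ... 4 = usability catastrophe

# Hypothetical findings from one evaluator's pass over a checkout flow.
findings = [
    Finding("Checkout", "Visibility of system status",
            "No spinner or message while payment is processing", 3),
    Finding("Checkout", "Error prevention",
            "Card expiry field accepts dates in the past", 4),
]

# Sort the report so the most severe issues get addressed first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.screen}: {f.description} ({f.heuristic})")
```

Even a simple severity-sorted list like this makes it much easier for a team to agree on which problems to fix first.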

The versatility and efficiency of this method explain why Heuristic Evaluation has become such a popular topic with employers and on social media.

2. COGNITIVE WALKTHROUGHS

Two people discussing the designs on a laptop.
Photo by John Schnobrich on Unsplash
  • Application: Early-stage to Post-launch
  • Goal: Optimizing the feedback cycles of a task.
  • The Pros: Offers a holistic perspective.
  • The Cons: Requires an outside perspective to prevent biases.

Did your teacher ever suggest going over your test before you handed it in? It probably saved you from slip-ups several times, right? Well, a cognitive walkthrough isn’t any different.

The elements in a design often work well on their own, but as we put them together they can easily end up hindering the overall usability. That’s because there has to be a feedback cycle of input and output integrated into the core of each interaction.

An interface is only a tool to facilitate an effective “dialogue” between the user and the task. So, keeping the essential HCI principles of effective feedback cycles in mind, we can go through our designs from the user’s perspective to ensure that the dialogue makes sense and that there is an appropriate response from the interface for each interaction.
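As a rough sketch of what this looks like in practice, the prompts below are the four questions commonly used in the classic cognitive walkthrough method (Wharton et al.). The task steps are hypothetical, and a real walkthrough would record an answer and a note at each point.

```python
# The four standard cognitive walkthrough questions, asked at every step of a task.
WALKTHROUGH_QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect they want?",
    "If the correct action is performed, will the user see that progress is being made?",
]

# Hypothetical steps for a 'reset password' task.
task_steps = [
    "Open the login screen",
    "Tap 'Forgot password?'",
    "Enter the account email address",
    "Open the reset link from the confirmation email",
]

# Walk through the task step by step, prompting the evaluator at each point.
for step in task_steps:
    print(f"\nStep: {step}")
    for question in WALKTHROUGH_QUESTIONS:
        print(f"  - {question}")
```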

3. MODEL-BASED EVALUATION

A pilot’s cockpit.
Photo by Dan Lohmar on Unsplash
  • Application: Mid-stage to Post-launch
  • Goal: Efficiency of the user flow.
  • The Pros: Addresses the information architecture (IA) and ensures a more efficient interface.
  • The Cons: Very little emphasis on the contextual elements.

Creating visual representations of the operations that a user takes to accomplish a goal has been a popular way to compare various complex digital and analogue interfaces for many years. Several methods exist that aim to address the various roles or “views” of the users.

A common way to evaluate the efficiency of an interface like a pilot’s cockpit, for example, is with the GOMS family of models.
We can adopt a Human Information Processor view by comparing each action a user has to take in order to assess which interface is faster — and therefore simpler. But this model doesn’t account for the user’s context…

HCI principles like activity theory and situated-action models teach us that a user’s actions arise from many contextual factors that need to be considered for an accurate representation of the activity.
Therefore, models like Cognitive Task Analysis and Hierarchical Task Analysis take a Predictor and Participant view of the user, introducing aspects of the user’s cognition and broader contextual elements as well.

This method of predictive evaluation is very specific and requires a lot of expertise, but it provides an effective way to evaluate tasks from a high-level perspective.
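To make the Human Information Processor view concrete, here’s a minimal sketch of a Keystroke-Level Model (KLM) estimate, the simplest member of the GOMS family. The operator times are the commonly cited averages from Card, Moran, and Newell; the two task breakdowns being compared are hypothetical.

```python
# Keystroke-Level Model operators with commonly cited average times (seconds).
KLM_OPERATORS = {
    "K": 0.2,   # press a key or button
    "P": 1.1,   # point at a target with a mouse
    "H": 0.4,   # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(sequence: str) -> float:
    """Sum the operator times for a sequence such as 'MHPK'."""
    return sum(KLM_OPERATORS[op] for op in sequence)

# Hypothetical comparison: typing a date vs. picking it from a date-picker widget.
type_date = "MH" + "K" * 10     # think, home to keyboard, type a ten-character date
pick_date = "MHPK" + "MPK" * 2  # think, home to mouse, open the picker, pick month, pick day

print(f"Typing the date:  {predict_time(type_date):.2f} s")
print(f"Using the picker: {predict_time(pick_date):.2f} s")
```

The lower total suggests the faster flow, but remember: nothing in this model captures the user’s context.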

4. ERRORS AND REVIEWS

Instagram app-store reviews.
Photo by Obi Onyeador on Unsplash
  • Application: Post-launch
  • Goal: User pain points.
  • The Pros: Authentic user-generated data.
  • The Cons: Only available after the user has failed.

A great way to evaluate our users’ experiences without conducting an expensive study is to look at what they say about our products once they go live. This is usually fairly easy to do with analytics software or with insights from crash reports and app store reviews. But although the data is user-generated, it still requires predictive conclusions.

In other words, we can learn what happened, but learning why it happened will require speculation and further research.
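As a small illustration, the sketch below tallies recurring themes in app store reviews with simple keyword matching. The reviews and keyword buckets are hypothetical; in practice we’d lean on proper analytics or text-mining tools, but the principle is the same.

```python
from collections import Counter

# Hypothetical app store reviews collected after launch.
reviews = [
    "App crashes every time I open the camera",
    "Love the design but login keeps failing",
    "Crashed twice today, please fix",
    "Can't log in since the last update",
]

# Hypothetical keyword buckets for common pain points.
themes = {
    "crashes": ["crash", "crashes", "crashed"],
    "login problems": ["login", "log in", "sign in"],
}

counts = Counter()
for review in reviews:
    text = review.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1

# This tells us *what* users struggle with; *why* still needs follow-up research.
for theme, count in counts.most_common():
    print(f"{theme}: {count} mention(s)")
```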

This type of research is very valuable to developers, though, and it often provides interesting insights into our users’ needs and pain points.

5. SIMULATION-BASED EVALUATION

A robot doing a task on a futuristic screen.
Photo by Eli Alvarez on Unsplash
  • Application: Mid-stage to Post-launch
  • Goal: Perceived usability.
  • The Pros: Reliable feedback on accessibility.
  • The Cons: Low-detail feedback.

As AI technology evolves, we are seeing more and more software aimed at evaluating interfaces both at the design and development stages.

Google has used this technology for a long time in its analytics and SEO software. And companies like Stark offer software that we can use to evaluate a design’s perceived visual accessibility. These are only a few of the many AI tools being developed to help us make more inclusive and usable products.
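To give a sense of what these tools check under the hood, here’s a minimal sketch of the WCAG 2.x contrast-ratio calculation that most visual-accessibility checkers implement. The colour pair is just an example; tools like Stark wrap checks like this (and many more) inside design-tool plugins.

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB colour, per the WCAG 2.x definition."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: mid-grey text (#767676) on a white background.
ratio = contrast_ratio((118, 118, 118), (255, 255, 255))
print(f"Contrast ratio: {ratio:.2f}:1")                      # roughly 4.5:1
print("Passes WCAG AA for normal text:", ratio >= 4.5)
```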

But these tools can become expensive. The more complex products often require licences, and since AI is still in its infancy when it comes to predicting human behaviour, they might not offer enough reliable insight to be worthwhile for in-depth evaluations.

Thanks to decades of research in HCI and UCD, we have access to high-quality information that we can apply to various methods of predictive evaluation.
And although these methods can’t substitute for the valuable insights gained from real people interacting with our designs, they are effective ways to maintain a user-centred perspective even when there aren’t users.

Let us know how you apply Predictive Evaluation in your own area of work!

Follow me on LinkedIn for more design and tech-related content, or learn more about me through the links in my bio.


Karl Reitzig
The Designer’s Toolbox

I’m a South African UX designer sharing my thoughts on digital design and tech.