Heuristic Evaluation: An Introduction

Angie Sinanaj · Published in MyTake · 3 min read · Oct 15, 2019

There are many different ways to evaluate software, but today I'm going to talk about a technique called Heuristic Evaluation, created about twenty years ago by Jakob Nielsen and colleagues.

The basic idea of heuristic evaluation is that you’re going to provide a set of people — often other stakeholders on the design team or outside design experts — with a set of heuristics or principles, and they’re going to use those to look for problems in your design.

Each of them is first going to do this independently, walking through a variety of tasks with your design to look for these problems.

Different evaluators are going to find different problems and then they’re going to communicate and talk together only at the end.
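As a sketch of that merge-at-the-end step, here's a minimal Python example (the evaluators, heuristic names, and findings are all hypothetical) that pools independently recorded violations and counts how many evaluators hit each one:

```python
from collections import defaultdict

# Hypothetical findings from three independent evaluators; each entry
# pairs one of the heuristics with a description of the violation.
evaluator_reports = [
    [("visibility of system status", "no progress bar during upload"),
     ("error prevention", "no confirm before delete")],
    [("error prevention", "no confirm before delete"),
     ("consistency and standards", "two different icons for 'save'")],
    [("visibility of system status", "no progress bar during upload")],
]

# Merge only after everyone has evaluated independently: group
# identical findings and count how many evaluators reported each.
merged = defaultdict(int)
for report in evaluator_reports:
    for finding in report:
        merged[finding] += 1

# Findings reported by more evaluators float to the top.
for (heuristic, problem), count in sorted(merged.items(), key=lambda kv: -kv[1]):
    print(f"{count}/3 evaluators: [{heuristic}] {problem}")
```

The point of merging only at the end is that each evaluator's pass stays unbiased; the counts then give you a rough signal of which problems are hardest to miss.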

This is a technique that you can use either on a working user interface or on sketches of user interfaces, and it works really well in conjunction with paper prototypes and other rapid, low-fidelity techniques that help you get your design ideas out quickly.

Nielsen’s ten heuristics are a pretty darn good set. They do a pretty good job of covering many of the problems that you’ll see in many user interfaces, but you can add on any that you want and get rid of any that aren’t appropriate for your system.

Nielsen’s 10 Heuristics

Give your evaluators a couple of tasks to use your design for, and have them do each task, stepping through carefully several times. When they’re doing this, they’re going to keep the list of usability principles as a reminder of things to pay attention to.

Now which principles will you use?

I think Nielsen’s ten heuristics are a fantastic start, and you can augment those with anything else that’s relevant to your domain.

Obviously, the important part is that you’re going to take what you learn from these evaluators and use those violations of the heuristics as a way of fixing problems and redesigning.

In this process, you’ll want multiple evaluators rather than just one, because no single evaluator finds all the problems, and each additional evaluator tends to surface new ones.

It’s of course going to depend on the user interface that you’re working with, how much you’re paying people, how much time is involved — all sorts of factors.

Jakob Nielsen’s rule of thumb for heuristic evaluation is that three to five people tend to work pretty well, and that’s been my experience too.
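That rule of thumb traces back to Nielsen and Landauer's model of problem discovery, where the expected share of usability problems found by a panel of i independent evaluators is roughly 1 − (1 − λ)^i, with λ (the chance one evaluator finds a given problem) averaging about 0.31 in their studies. A quick sketch of the diminishing-returns curve (the λ value here is their published average, not something measured for your interface):

```python
def share_found(evaluators: int, discovery_rate: float = 0.31) -> float:
    # Nielsen & Landauer's model: expected fraction of usability
    # problems found by a panel of independent evaluators.
    return 1 - (1 - discovery_rate) ** evaluators

# Print the curve for panels of one to six evaluators.
for n in range(1, 7):
    print(f"{n} evaluators -> about {share_found(n):.0%} of problems")
```

With these numbers, three evaluators catch roughly two thirds of the problems and five catch over 80%, while each extra evaluator past that adds less and less, which is why the 3-to-5 range is a reasonable default.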

If we compare heuristic evaluation and user testing, one of the things we see is that heuristic evaluation can often be a lot faster: it takes just an hour or two per evaluator, while the mechanics of getting a user test up and running can take much longer, not even accounting for the fact that you may have to build working software first.

Also, the heuristic evaluation results come pre-interpreted because your evaluators are directly providing you with problems and things to fix, and so it saves you the time of having to infer from the usability tests what might be the problem or solution.

Now conversely, experts walking through your system can generate false positives, problems that wouldn't actually come up in a real environment, and this indeed does happen, so user testing is, sort of by definition, going to be more accurate.

Personally, I think it’s valuable to alternate methods:

Heuristic evaluation and user testing find different problems, and by running HE early in the design process, you'll avoid using up the real users that you may want to bring in later on.

Thank you for reading!

I hope I have provided a useful introduction :)
