Lean User Testing

"We don't have time for that." No time, no budget, and no resources is the default scenario we all face in project or product development, let's be honest. We all have to deal with that situation, and it is the best precondition to start with a lean way of user testing.

One of the main reasons I have noticed why user testing is often done once and then never again is that the analysis phase of the results can be really time-consuming. Amazing presentations, scientific discussions, and bringing the results back to the backlog at a late stage are killers for user tests in the typical environment of an IT company.

So let's think about how we can reduce the effort of the analysis phase and concentrate on the initial goal: improving our solution. The right balance between what we want to know and what we don't want to know is key here. To apply a truly LEAN way of user testing, we need a quick way to understand what does NOT work. Test results need a filter that hides all tasks which already work, because those are not what we need to improve. Lean User Testing focuses exactly on what does not work, and allows us to iterate on these issues much faster.

A standardized structure

An unclear structure within your questions and tasks is the first way to fall into the trap of over-analysis. If you have to figure out the patterns, the right order, and the meaning of your test results, the whole analysis phase can become really time-consuming. Aim for a simple structure from the beginning and get a clear picture of what you want to find out.

Let's put this into one simple guideline. (By the way, that's the only one.)
A) Every question or task shall follow the same structure.

a. The number of the task
b. The task or question itself
c. 3 and only 3 response options
d. A comment field

The 3 response options are divided into:

1. Immediately Successful
2. Successful with a hint, or after some time (please comment)
3. Not successful (please comment)
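As a sketch, the standardized structure above could be captured in code. This is only an illustration, assuming nothing about any particular survey tool; the names `Outcome` and `TaskResult` are my own, not part of the method.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    IMMEDIATE = 1       # 1. Immediately successful
    WITH_HINT = 2       # 2. Successful with a hint, or after some time
    NOT_SUCCESSFUL = 3  # 3. Not successful

@dataclass
class TaskResult:
    task_number: int    # a. the number of the task
    task_text: str      # b. the task or question itself
    outcome: Outcome    # c. exactly one of the 3 response options
    comment: str = ""   # d. a comment field (used for options 2 and 3)

# Example: one recorded observation for one user
r = TaskResult(1, "Please log in to this site.", Outcome.WITH_HINT,
               "Found the login only after a hint about the menu.")
```

One record per user per task is enough; the fixed structure is what makes the later filtering step trivial.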

The test situation

Your user has the device and the printed, numbered tasks in front of them. The tester has a laptop or tablet with a virtual or real keyboard and the survey tool open. The questions are prepared in a survey tool, as described before (see screenshot below).

Now follow your user through the tasks.

Ad 1) Immediately successful: The participant reads, understands, and successfully executes the task. Easy: check the first option in your survey tool.

Ad 2) Successful with a hint, or after some time: If the user needs a bit longer than expected or struggles, don't let the situation become uncomfortable; try a simple hint, or ask the question differently. If the user is successful then, make a check at (2) and add a quick comment in the survey tool.

Ad 3) Not successful: Sometimes the task simply doesn't work. Don't torture your participant. If it's clear this will not be a success, say something like, "That's fine, let's continue with the next question. We have to find a better solution here. Thanks for the feedback."

Make a check at option (3) and add a comment in the survey tool.

As you can imagine, the challenge here is to quickly summarize the situation in a few words without delaying the flow too much. The comments shall simply describe the reasons in a way that the tester is still able to understand after a couple of days.

The Result

What this approach generates is the FILTER I was talking about before. All immediately successful answers can be filtered out, because these tasks work and shall no longer demand cognitive effort on our side.

We need to concentrate mainly on options 2 and 3. The point here is that the comments contain the value. I personally never have more than 5 to 10 users per test, so it is manageable.
Let’s consider an example.

Task 1 — Please Login to this site. (5 users in total)

Possible result scenarios:

a) 100% Immediately Successful (or 5 out of 5): 
 This task works in this environment and does not need to be considered for this test phase anymore. Done, without one second of additional analysis.

b) 4 immediately successful, 1 successful with a hint, or after some time:
 Take a look at the comments of that one user and see what happened.
 It depends on the circumstances and your experience whether a redesign of this task shall be considered. 80% success is still on the safe side.

c) 3 immediately successful, 1 successful with a hint or after some time, 1 not successful:
Without much further analysis, a redesign of this task can be recommended. What exactly needs to be redesigned shall, again, be found in the comments.

Scenario c) looks like this in a survey tool:

When you now scroll through your task results, you immediately get an overview and a feeling of where to start with your redesign.

Ad a) is clear, make a checkmark. It works.
Ad b) can require a redesign, depending on the comments.
Ad c) is also clear, redesign the solution.

→ Ideally you start with c), improving the worst issues.

Then continue with b), as this needs a bit more detailed consideration. Depending on the comments, a redesign is mostly recommended, but maybe not a complete one; slight improvements can be enough.
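The triage above can be sketched as a small filter over one task's responses. The thresholds simply mirror the a/b/c scenarios; the function name and return labels are my own illustration, not part of any tool:

```python
from collections import Counter

def triage(outcomes):
    """Classify one task's results as 'works', 'review comments', or 'redesign'.

    outcomes: one entry per user, using the response options 1, 2, or 3.
    """
    counts = Counter(outcomes)
    if counts[3] > 0:
        return "redesign"         # scenario c): at least one outright failure
    if counts[2] > 0:
        return "review comments"  # scenario b): hints or extra time were needed
    return "works"                # scenario a): 100% immediately successful

# 5 users on "Please log in to this site."
print(triage([1, 1, 1, 2, 3]))  # scenario c) -> redesign
print(triage([1, 1, 1, 1, 2]))  # scenario b) -> review comments
print(triage([1, 1, 1, 1, 1]))  # scenario a) -> works
```

Running this over all tasks gives you the redesign shortlist directly, with no analysis step beyond reading the comments of the flagged tasks.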

See what happened? The value lies in the comments. OK, that requires some effort, but the point is: when you structure your test in the way described, you receive a pretty awesome and presentable result with a tool like SurveyMonkey.

This means there is not much analysis necessary, and you see very quickly which tasks you have to concentrate on.

Remote Test

One of the things I like most about this method is that you can also experience the user tests live, e.g. if you don't run them yourself. Simply open SurveyMonkey on a mobile device, for example, and see how users are progressing live.

And this method also works pretty well remotely, via desktop sharing, of course.

One last thing. Yes, user tests hurt. Me too. Every time I show a first prototype to someone else, this feeling appears. Becoming nervous. What is she doing? Why doesn't she get this simple thing? Sounds familiar? Good. That's part of the game, and also one reason why still so few test their hypotheses with real users. Learning to appreciate the pain in order to improve is an important step.

Like what you read? Give Erhard Wimmer a round of applause.
