
Testing Fast And Cheap: A Quick Guide To UX Research. Part 3.

Design iterations: the proof of concept.

Galina Kalugina

--

Experience design is not science. There are no textbook rules backed by a hundred years of research. Moreover, the profession as we know it has only existed for the past few decades, which means all the knowledge we have accumulated so far is empirical rather than scientific, and it still has gaps. Common sense isn't the most solid ground to build on either, given that the mental models of different people vary a lot depending on their profession, interest in tech, and life experience. That's why we should test our ideas regularly and be ready to discard the weak ones as soon as we get the chance to work on better ones.

Moderated usability testing

In this experiment, the researcher observes the behavior of users while they complete specific tasks using the product in question.

When it works best

It comes in handy when you aim to gather qualitative data and find valuable insights into your product's weaknesses. As professionals, we are interested in tech, which leads to a good understanding of it. Most people prefer other areas of human knowledge, so they use tech for work or everyday routines rather than for pleasure. Hence, some of our assumptions as designers may be incorrect. Observing the way people interact with our product is a great way to find out whether we understand their needs correctly.

How to prepare

  • Define the key scenarios
    You may want to walk your respondents through every possible situation, but that won't be effective, because people tend to get tired. It's better to test 2–3 scenarios at a comfortable pace than to skim through all of them.

  • Learn how to experiment
    Check out Google's guidelines for usability testing.
  • Write a script
    Moderating the experiment may be overwhelming. Spare yourself some unnecessary stress: prepare a script to rely on. Learn it by heart as well — it will help you navigate the session.
  • Make a test run
    Things may not go as planned — double-check for issues before recruiting respondents. Ask your teammates to assist.
  • Fix the issues
    The first trial will likely be imperfect. Take your time to improve it: schedule some breaks to reflect on the test run and fine-tune the script. Take a deep breath.

Which tools to use

  • Mobile/action camera — you are not going to film an action movie, so professional gear is not essential.
  • Camera holder — the cheapest table mount would do the trick.
  • An empty room or any other quiet place — it's great if you have a meeting room, but a quiet corner in a local cafe is OK too. Don't forget to ask your respondents whether they're comfortable with it.
  • Laptop or mobile phone with your prototype ready to go.
  • Tea, coffee, snacks — a friendly and informal environment will help respondents relax and behave as they usually do.

How to conduct

  • Design the experiment
    Define your key questions and put together 2–3 tasks that will help you answer most of them.
    Example: you are testing an online apparel store.
    Bad task:
    Buy something pretty.
    Another bad task:
    Search for jeans, then select the color blue and a medium size, and hit the “Add to cart” button…
    Good task:
    Are you going to shop for clothes sometime soon? What are you thinking of buying? Buy this or a similar item on our site using this test credit card.
  • Recruit respondents
    In most cases don’t have to find people who exactly match your target audience to test interaction patterns, since those patterns are usually universal. Ask your friends or colleagues from non-techie departments to participate in testing. Don’t hesitate to reach out to your co-workers to help you with recruitment. Remember to give credit to people who helped and to share your results with them.
    Make sure your recruits aren’t tech-savvy. Otherwise, it may appear as the interface performs well while, in fact, it’s respondents who are unusually good with tech.
    Here’s a little anecdote about that: I’ve been conducting an extensive usability testing on somewhat complicated features. My respondent was a gym receptionist, and she adored all the features immediately — there wasn’t any learning curve at all — she dug right in. I grew suspicious and started to ask her about her interests, hobbies, and previous experiences. Turned out, she studied to be a personal trainer at the moment, but she still had strong ties to her previous career — a software engineer. The agency we hired to recruit her only asked about her job, but not about her education. It was an hour well spent, all right, but, unfortunately, I had to exclude her results from the set.
  • Prepare the equipment
    Start with the product you are going to test: confirm that everything works well. If you're testing a prototype, double-check all the links. If you work for a foreign client remotely, translate the UI copy for your respondents. Set up the camera before your first respondent shows up and make sure the screen of the device you use for testing is visible on the video. Check that all the devices are charged and keep an auxiliary power source ready. Don't forget about test accounts and credit cards if needed — people are usually uncomfortable sharing their personal information, especially when the software may be unstable.
  • Moderate the experiment
    This part may be difficult if you're testing your own work, because the outcome doesn't always (pretty seldom, actually) match our expectations. Remember the ultimate goal — to create a better product and to become a better professional, not to appear right in a particular situation. So don't lead participants in any way and don't give them hints unless they are completely stuck and frustrated. The only reason to give a tip is to keep your experiment from ending prematurely. Don't cheat ;)
  • Process the results
    The approach to processing is the same as for interviews and field observations.

How to deliver results

The ultimate advantage of this experiment is that you get immediate insights. For your deck, you may group issues by kind (software bugs, logical errors, visual design slips, unclear UI copy, and so on) or by severity, from critical bugs to minor improvements. You may also want to cut out the most insightful moments of your experiment and show them during your in-person presentation to the team. Avoid embedding videos into the deck, though — it may cause technical problems.

I suggest the following structure for the slides:

  • Bug description (+ kind);
  • The reason to fix it;
  • Severity;
  • Possible solutions.

If the number of issues is extensive, you may add photos and some quotes from your respondents to build empathy in your team and to enliven the deck a little.
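
If you happen to log findings in a simple script before building the deck, grouping them by kind and severity is straightforward. Below is a minimal, purely illustrative sketch in Python; the Issue fields simply mirror the slide structure above, and the sample findings are invented:

from dataclasses import dataclass, field

# Severity levels, ordered from blocking problems to nice-to-haves.
SEVERITY = ["critical", "major", "minor", "improvement"]

@dataclass
class Issue:
    description: str
    kind: str                  # e.g. "software bug", "unclear UI copy"
    severity: str              # one of SEVERITY
    solutions: list = field(default_factory=list)

findings = [
    Issue("Checkout button is hidden on small screens",
          "visual design slip", "critical", ["rework the layout breakpoint"]),
    Issue("The 'Proceed' label confused 3 of 5 respondents",
          "unclear UI copy", "major", ["rename it to 'Go to payment'"]),
]

# One slide per issue, ordered from critical bugs to minor improvements.
for issue in sorted(findings, key=lambda i: SEVERITY.index(i.severity)):
    print(f"[{issue.severity}] {issue.kind}: {issue.description}")
    print("  possible fixes:", "; ".join(issue.solutions))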

When it fails

  • The researcher jumps to conclusions during the experiment
    Detach yourself from the prototype in question while experimenting. Don't start thinking of a solution the moment you see an issue. The goal is to find as many bugs as possible, not to hot-fix a few interaction issues. Besides, by occupying yourself with finding solutions, you risk diverting your attention from something important.
  • The task is too detailed; the researcher asks leading questions
    If you give step-by-step directions to your users, you will never know whether they would find their way on their own.
  • The task is too vague; the questions are too general
    General queries lead to general answers. The goal is to learn about the weak links in the interaction, not to gather opinions.
  • Respondents are overly prepared
    The majority of people in the world are neither tech-savvy nor much interested in tech. As UX designers, we mostly communicate with people who know a little something about software, so we sometimes fall under the impression that the world is far more advanced than it is. To avoid confirmation bias, we should aim to recruit respondents who don't have any tech aspirations.
  • Respondents don’t understand their task
    In most cases, it doesn’t matter whether respondents match your target audience exactly or not. Though if you are going to test a niche software for skilled professionals, you should search for users, who have a solid understanding of a subject area. Though, if you are going to test a site selling toys and your marketing team believes the target audience is 30 to 40 y.o. middle-class women, it’s still ok to test your prototype on 23 y.o. male student.

Unmoderated usability testing

It's a kind of usability testing conducted remotely, with respondents completing tasks on their own devices in the comfort of their homes.

When it works best

This method suits situations where you can't be physically present at the same location as your users, or when the users available to you aren't representative.

How to prepare

The preparation routine is mostly the same as for moderated usability testing. The difference is that you have to make sure everything in your task is crystal clear. You won’t be able to fine-tune the task as you go.

Which tools to use

Prototypr published a pretty good review of such tools on their blog. I suggest checking it out.

How to conduct

  • Design the experiment
    The way to do it is the same as for moderated testing, though you should keep in mind that respondents will use their own devices. Hence, expect different operating systems and hardware performance.
  • Direct the experiment
    Yes, I know how that sounds. But in this case, you are not as time-restricted as during a moderated test, so it won't hurt to check the preliminary results and correct the tasks if you think that would work better. Still, test the experiment before running it in any case.
  • Process and deliver the results
    Again, not much difference in approach here. The one thing I want to warn you about: in the case of unmoderated testing, you are likely to deal with respondents who test a lot of software rather often. As a result, they inevitably become more proficient than the majority of users. In fact, it takes a somewhat savvy person to moonlight as a software tester, meaning they weren't “regular users” to start with. I'm not saying they are all tech experts, but I'd recommend dividing positive feedback by two and multiplying negative feedback by ten.

How to deliver results

In terms of presentation, unmoderated testing doesn't differ much from moderated. The one thing I encourage you to do is to make sure your software performs well on any hardware. This gets overlooked from time to time, so it won't hurt to remind stakeholders about the issue.

When it fails

Remote unmoderated testing is prone to the same issues as the moderated kind.

A/B testing

This is a comparison of two otherwise identical samples, A and B, that differ in a single variable, designed to find out which one performs better.

When it works best

This method works best when you need to decide between two seemingly equal options: for example, the color of a button or the copy for a call to action. However, it's not the best choice if you want answers to more complex questions.

Another anecdote: at one of the companies I worked for, we once had a conflict between a junior designer and a junior developer. They couldn't agree on the color scheme for the app, and the software engineers wanted to put the question to a vote. I considered that a dangerous precedent, so I suggested A/B testing instead. It turned out there was no significant difference between the two options, though the developer's pet beat the designer's by 3%, so it won. Everyone calmed down and proceeded with more critical tasks. We also made it clear to the developers that they can't take control over design decisions just because they outnumber us. Not on my watch.

Which tools to use

  • Optimizely — a platform that supports several kinds of experiments, A/B tests included. It has lots of features, but it isn't free of charge, so you may want to find out whether somebody else in your company is willing to use it as well. Otherwise, it probably wouldn't be the smartest investment.
  • Google Forms — free of charge, has all the features needed.

How to conduct

  • Prepare screenshots with two options
    It’s crucial to have only one variable. Otherwise, you won’t be able to identify which one impacted the results. If you have more than one question, run several tests.
  • Prepare follow-up questions if you have any
    Sometimes it makes sense to ask people why they made their choice, because their reasoning may be a sound basis for future decisions in similar situations.
    Example: someone in our team was concerned that a UI element looked like an error message because of the shade of orange we used, so we asked users about it after the test. They said it didn't, though they didn't like the whole color scheme anyway. So we eliminated one of our concerns along with choosing the preferred option.
  • Upload and distribute them
    Share the link on your social networks or ask the SMM team to help you with distribution. Don't be shy to ask for their assistance: in general, people like to be helpful. Don't forget to give them credit for their help and to share the results if possible.
  • Wait till you get a significant number of responses (I suggest 100+)
    If there are too few responses, the result won't be representative: when only three people vote, one option wins 2:1, yet a single person decides the outcome. When a thousand people vote, no single person has that power. (See the sketch right after this list for a quick way to check whether a split is statistically meaningful.)
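
To check whether a vote split is meaningful rather than noise, you can run a quick two-proportion test against a 50/50 null hypothesis. Here is a minimal sketch in Python; the function name and the 1.96 cutoff (roughly 95% confidence) are my own illustration, not something the tools above prescribe:

import math

def ab_z_score(votes_a, votes_b):
    """z statistic for an A-vs-B vote against a 50/50 null.
    |z| > 1.96 suggests significance at roughly 95% confidence."""
    n = votes_a + votes_b
    share_a = votes_a / n
    # Standard error of the observed share if the true split were 50/50.
    se = math.sqrt(0.25 / n)
    return (share_a - 0.5) / se

# Three votes: a 2:1 split is far from significant (~0.58).
print(round(ab_z_score(2, 1), 2))
# A hundred votes: the same 2:1 ratio clearly is (3.4).
print(round(ab_z_score(67, 33), 2))

With three votes, one ballot flips the winner, which is exactly why the z score stays tiny; past a hundred responses, a consistent preference starts to stand out.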

How to deliver results

Given the simplicity of the experiment, it's OK to end up with a pretty basic one-slide deck featuring the screenshots in question, their performance rates in percent, and the total number of respondents.

When it fails

The most common reason for this experiment to fail is the complexity of the samples. If you vary more than one element (for example, both the color of the button and the label on it), you can't tell for sure which factor contributed to the result, meaning you can't fully rely on it. If you want to test two ideas, run two separate experiments.

Other meanings

In marketing, A/B testing works a little differently: marketers may compare two or more significantly different designs, or even concepts for the products in development, and track the outcome with clickstream analytics software. That's neither cheap nor fast, and it's unlikely to be done by a team of one, so I won't cover this kind of testing here.

Usability testing is not for the faint of heart: a lot of designers feel conflicted about it. On the one hand, it sounds like the right thing to do; on the other, unpleasant things might come up. And, well, they usually do. But hey, we grow by learning, iterating, and learning some more.
No pain no gain ;)

Previously on Testing fast and cheap: “Pre-design stage: Idea Hunting.”
Up next: “Life after The Release: success evaluation.”
