The quick and the dead

User research in a world that doesn’t have time for it

Dzacqueline
Vendasta
6 min read · Apr 17, 2020


Silhouette of lonesome cowboy riding horse at sunset, Vector Illustration

In the world of software development, we seem to be at a point where everyone agrees user research is good…when there’s time for it.

It’s a sad thing to come to grips with, but we are UX researchers, gosh darnit! And if there is one thing we’re good at, it’s taking things we didn’t want to hear and turning them into something useful.

Alas, part of being quick is actually knowing what to do.
Most people tend to agree that ‘learning by doing’ is the most effective method to learn anything…but like all good things, it takes time and you have to arm yourself with some knowledge first.
So, we’re going to go back to kindergarten for a moment, and start with protocol.

Photo by Patrick Tomasso on Unsplash

The rules.

The do’s, the don’ts and, to get you there a little quicker, the “don’t you dare tell anyone I told you to do this”.

Ask open-ended questions! Do not ask leading questions!

How can you tell whether your questions are open-ended or leading?
If a question can be answered with a simple “yes” or “no”, it’s not open-ended — it’s closed (and likely leading) — and you should rephrase it so it requires a more in-depth answer.

Avoid implying negative or positive emotions as well.
Don’t ask “if X happened, would you be happy/disappointed?”
Rephrase it to “if X happened, how would you feel/what would your thoughts be?”

Examples of open-ended questions:

  1. “Could you tell me about a time when you…
    <performed some action or workflow you’re interested in>?”
  2. “What are your thoughts on…
    <an idea or concept you’re trying to gather feedback on>?”
  3. “If <scenario> happened, how would you feel?”
  4. “If you had to change 3 things about…
    <some idea or design you’re looking for feedback on>, what would they be?”
  5. As backwards as it sounds, you can also use an open-ended question to ask a direct question, such as reading your problem statement to the user and asking for their thoughts on it.
    Example: I’d like to get your thoughts on this statement:
    “As a team lead, I have to manually enter data from my digital dashboard into a spreadsheet in order for my manager to see it, because they don’t log into our team workspace.”
    What do you think about that?
  6. “Why?”
Photo by Jon Tyson on Unsplash

Why?

While we’re on the subject of “why?”, don’t just stop at the answer you get; keep going.

Example: your interviewee tells you it really bothers them that they can’t export data from a table. You ask why, and they say they need to put the information into a spreadsheet so they can share it with someone who doesn’t log into the software.
Ok, that is helpful, but you still don’t know what their actual problem is. Who do they need to share it with, and why? Maybe it’s so they can share it with their manager who will make decisions based on it. Which decisions?
What is the end purpose of the data they want to export?

What problem does it solve?

The duct tape is for your mouth.

Bring 3 things to every interview and usability test:
A pen, a notebook, and duct tape.
Why? Looking at your screen and typing on a keyboard is distracting. Pen and paper should always win out over taking notes on a device. The only exception is audio recording — and even then, you should still take physical notes in case the recording doesn’t work.

A fantastic (but not necessary) bonus item is a colleague who takes notes for you while you conduct the interview.

Ask your questions and listen to the answer — the full answer. In-person interviews are ideal, but many happen over the phone or computer, where it’s difficult to tell whether a person is done answering or just thinking hard about how to answer. Get comfortable with awkward silences: sit in silence for about 5 seconds before giving a prompt or offering to move on.
I call it sitting with the dread. Just kidding, no I don’t, but maybe I should.

Several template iterations later…

Make a research process document for each project.
Mine is a document that lists:

  1. The problem statement and/or user story.
  2. Our targeted user persona.
  3. Assumptions, hypotheses and the metrics to be measured upon release.
  4. A timeline that illustrates the types and times research was carried out.
  5. A TL;DR of said research, concerns/constraints, and my recommendations with the reasons behind them.
  6. Any links, screenshots, conversations, recordings, etc. that I have of the research.

Make it once, and then just use that as a template.

You can also repurpose a lean canvas, or use this slide deck.

Photo by Oliver Roos on Unsplash

Usability testing

Not user testing. We’re not testing the user, they’re testing the design.
Please let them know that; it will put your user at ease and improve the quality of the session.

Generally speaking, usability testing consists of having a person actually use your design or a prototype of it.

This is awesome — and time consuming.
It can also be frustrating for people if the design isn’t intuitive.
Nothing is quite so poignant as seeing a person get frustrated while trying to use your (team’s) design.
There are obvious pros and cons to this kind of usability testing, and user frustration is a bit of both.
It’s hard to downplay or ignore a usability issue when you can see a real person struggle with it.
But it’s also hard to get someone to come back for another test if the first one was stressful.

This kind of usability testing typically requires a fair bit of preparation and time, as well.

Might I suggest a slightly modified approach?

Do your research, make your prototypes while getting feedback from your team(s) along the way, schedule your interviews (or ambush coworkers).
Here’s where it changes: don’t have them use your prototype (aka, “try to complete this task”). Simply show them the prototype and ask them what they would do if they wanted to perform the given task.
Once they have answered, ask them why they chose that option, record their answer and move on to your next question.

Bonus tip: Compile all the decisions these people make into a decision tree.
A tree is more readable than a spreadsheet, and it’s a visual representation of the data that lands with far more impact than a wall of text when you show it to others.

If most/everyone can figure it out easily, congratulations; it’s intuitive.
If most/everyone cannot figure it out, you’ll know where they fall down in the process — but your precious usability testers (and you better believe they are precious) didn’t have to suffer for it.

Photo by Fleur on Unsplash

Measure the impact

“OK, I did the research, collaboratively designed a solution, ran usability tests, iterated based on the results and now this thing is ready to be shipped. Whoo!
Done deal, am I right?”

Amazing job! However, no. You’re not right and you’re not done.

There is one more step in user research to complete the loop, and that’s measuring your impact and iterating towards improvement.
No amount of research and testing can replace the feedback you can get from people using live software.

You could follow up with the users who were part of your research, but a less intrusive and complicated way is to simply monitor the metrics you were trying to influence.
Are people using what was released? Do users drop off anywhere?
Have logins doubled?

Dig into this data, and you’ll begin to form a solid idea of what your next problem to solve will be.
