Testing Fast And Cheap: A Quick Guide To UX Research. Part 4.

Life after the release: success evaluation.

Galina Kalugina
8 min read · Jul 17, 2019


Setting KPIs for UX and assessing the outcome is vital for a product in a pretty obvious way: users who are happy with the interface are likely to be loyal. But here I’d like to emphasize the importance of this research for our careers as designers. By processing feedback, we grow professionally: we learn where we went wrong and tailor our approach. Every report makes us think more profoundly next time. Getting feedback from peers and seniors is a good place to start, but at the end of the day, even rock stars may be wrong. Objectives change, experiences change, technologies change; the best way to stay in the loop is to regularly verify what you know about users with actual users.

Clickstream analysis

Clickstream analysis is a kind of research aimed at tracking users’ actions as they interact with the product.

What to expect

This is a great way to get unbiased quantitative data about users’ behavior. It works great when we need to find out whether users do what we expected.
Example: you have designed a wizard for a logistics service, which offers several transportation options depending on the cargo size, delivery location, and several other parameters. There are also a few payment options, and the final price depends on them as well. So you believe you should simplify the product: no matter how good the design is, users are going to get confused. Your client thinks the choice is what their customers want; they believe, though, that a better UI design would remedy the high bounce rate. You design it as agreed, but you are determined to learn whether it worked and at which step users bounce.

Note: though clickstream analysis works excellently for locating issues, it’s not well suited to finding out why a problem occurs. To investigate that, you’ll probably want to run a moderated usability test.

Which tools to use

The first thing I should mention here is that you can’t conduct this kind of research on your own: you are going to need assistance from engineers to implement the tracking code in your product. Second, this kind of research has a learning curve: you’ll need to watch a few educational videos to get accustomed to the interface. I recommend making a trial run to check that your settings are correct.

  • Google Analytics is a powerful tool that provides more data than you will ever need as a designer. It’s free up to a point. For mobile apps, consider Firebase.
  • Mixpanel is a tool with a focus on data interpretation. Some features are free. A minimal instrumentation sketch follows this list.
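
To give a sense of what the tracking code looks like, here is a minimal sketch of instrumenting the wizard from the example above with Mixpanel’s browser SDK. The event names, property names, and project token are my own assumptions for illustration; agree on the real ones with your engineers.

```typescript
// Minimal sketch: tracking wizard steps with the mixpanel-browser SDK.
// Event and property names are hypothetical placeholders.
import mixpanel from 'mixpanel-browser';

mixpanel.init('YOUR_PROJECT_TOKEN');

// One consistently named event per wizard step keeps funnels readable.
export function trackWizardStep(step: number, stepName: string): void {
  mixpanel.track('Wizard Step Viewed', {
    step,                // 1-based index: 1 = cargo size, 2 = location, ...
    step_name: stepName, // a name you'll recognize later in the dataset
  });
}

// Track the outcomes you care about, not every click.
export function trackWizardCompleted(paymentOption: string): void {
  mixpanel.track('Wizard Completed', { payment_option: paymentOption });
}
```

The same idea carries over to Google Analytics or Firebase; only the API calls differ.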

How to conduct

  • Do your homework
    Study the interface of your tool of choice and make sure you have enough resources to perform this kind of research. Figure out which metrics match the questions you have, and how to collect them so that they are representative. Seek competent advice wherever you can.
  • Get engineers and managers on board
    This is a complex and prolonged kind of research, so make sure you have all the support you may need. Not only does the implementation of the analytics code matter, but also the will to use your findings, which depends on the managers responsible for the product. Your insights don’t matter unless they become tasks in a backlog. Keep all the people who make this happen in the loop.
  • Make a test run
    Chances are you won’t be happy with the outcome of your first attempt to collect and interpret the data unless you have done this before. That’s perfectly normal, given it’s a niche skill for a designer. Be patient and secure your success by testing the experiment before going wide. Make sure you understand the numbers you get and can build a comprehensible report out of them.
  • Pass the guidelines to engineers
    Describe your needs as clearly as possible. Don’t neglect to name screens in a way you will recognize in the dataset, to list the actions you’d like to monitor, and so on. Engineers may have a different idea of what makes data convenient to work with, so don’t leave anything to chance.
  • Collect results
    Wait for some time after the release before checking the data. The right amount of time depends on the popularity of your product: for a high-traffic product, three days is enough, while for a niche one, a month may be just right. Avoid jumping to conclusions prematurely: that invites confirmation bias and compromises the entire experiment.
  • Interpret results and act on them
    Numbers themselves have no practical value; your goal is to understand what they represent. When working on the report, describe behavior trends rather than raw numbers, stress the issues, and draft solutions (a small funnel-analysis sketch follows this list). Statistics are not the point here; you only need them to substantiate your statements.
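
To make the interpretation step concrete: for the wizard example, the core arithmetic is a funnel, where each step’s drop-off is the share of users who reached it but never reached the next step. A self-contained sketch, with step names and counts invented purely for illustration:

```typescript
// Funnel arithmetic for the wizard example. The step names and user
// counts are made up for illustration.
const usersPerStep: { name: string; users: number }[] = [
  { name: 'Cargo size', users: 1000 },
  { name: 'Delivery location', users: 720 },
  { name: 'Transportation option', users: 410 },
  { name: 'Payment', users: 380 },
  { name: 'Confirmation', users: 350 },
];

for (let i = 0; i < usersPerStep.length - 1; i++) {
  const current = usersPerStep[i];
  const next = usersPerStep[i + 1];
  const dropOff = (1 - next.users / current.users) * 100;
  console.log(`${current.name} -> ${next.name}: ${dropOff.toFixed(1)}% drop-off`);
}
// A step whose drop-off is far above its neighbors' (here, the jump from
// "Delivery location" to "Transportation option") is where you dig deeper,
// ideally with a moderated usability test.
```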

How to deliver results

The approach you take depends on the initial question and your findings. In the most severe cases, I’d recommend drawing the user flow and marking the pain points, or comparing the flow you initially designed to the actual one. The visualization will help your peers make sense of the problem faster.

When it fails

The more complicated a method is, the more chances it has to go wrong. Take your time to prepare.

  • Not enough skills to use the tool
    The interfaces of analytics tools are dense and may be overwhelming at first. Learning the tool as you go is not an option here; master it in advance.
  • You are collecting data for data’s sake
    If you don’t have a particular question in mind, you probably won’t get any answers. If you are gathering data hoping to make sense of it afterward, you are likely wasting your time.
  • There’s too much data to process
    Remember that, at the end of the day, you are the one who is going to interpret the results. Have some mercy on your future self: don’t bite off more than you can chew. Otherwise, you risk losing motivation before finding any valuable insight.

Note: I often hear that designers expect to grasp the way users interact with their product via analytics. I believe our job is to design the user flow, not to figure it out post factum. The point of collecting data, in our case, is to learn whether we succeeded in directing users to their goal. So the search for alternative ways people use your product should not be the [only] goal of the experiment. We are looking for objective feedback, not a reality show episode.

Diary studies

This is a pretty basic yet powerful method. A group of volunteers from among the future users gets an alpha version of your product, uses it on an everyday basis for a week or so, and writes down all their impressions and comments in real time.

When it works best

This method works in cases when researchers don’t have access to end users or the system is too specific to test on anybody outside the client’s company. It also performs best when you want to assess interactions with delayed results, like notifications or recommendations, and it works well when you are creating an update for existing proprietary software and looking for ways to make it more effective than it is now.
Example: you’re working on a ticket system for a bank. The helpdesk is distributed across different time zones, and there are no offices in the vicinity for you to visit. Also, looking over someone’s shoulder usually makes people uncomfortable and disrupts the work process immensely. Asking them to write down their observations and concerns is far less invasive and potentially more informative.

How to conduct

  • Do your homework
    Learn everything you can about the system from your clients or managers. Make sure you understand the big picture before you start digging.
  • Prepare a task description
    It may seem crystal clear to you, but the people you are going to work with may not be as savvy as you are. Explain in plain language what they should do and why. Ask them to take notes: what is convenient for them, what irritates them and why, what they do if something goes wrong, and so on.
  • Negotiate access to potential users
    Some clients may be reluctant to give you any access to the actual users, or may be convinced their managers understand everything completely. In this case, compromise. Listen carefully to what the managers have to say, but let them know that a diary study would help you grasp the small details. The efficiency of routine operations adds up to overall productivity, and managers are rarely fully aware of all those operations, so they may well get on board.
  • Provide a safe space
    Assure your respondents that nobody is going to have access to their diaries and that it’s completely safe to share whatever thoughts and concerns they have. And, of course, make it that way. State that you are looking for a way to improve the product, not testing your respondents’ performance.
  • Task your respondents
    Connect with them personally to explain the task, and encourage them to share their thoughts openly. Specify how long you want them to keep a diary: a few days, a week, a fortnight? Figure out the length of time that would allow you to gather a representative amount of data but won’t burden the other party.
  • Collect the data
    This is where all the work starts. I suggest you make a spreadsheet and write down the issues users describe there. Chances are, those issues will be somewhat similar. Rank them by severity and frequency of occurrence to get a better view of the system’s weak links (a small scoring sketch follows this list).
  • Follow up with an interview
    Since you can’t observe users on task, you are likely to have some questions while reading the diaries. Negotiate short interview sessions to follow up on them. And don’t forget to write your questions down as you read :)
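
One way to rank the spreadsheet is to give each issue a rough score of severity times the number of mentions. The 1–3 severity scale and the sample issues below are my own assumptions for illustration, not part of the method itself:

```typescript
// Rough ranking of diary-study issues: score = severity * mentions.
// The scale and the sample issues are invented for illustration.
interface DiaryIssue {
  description: string;
  severity: 1 | 2 | 3; // 1 = annoyance, 2 = slows work down, 3 = blocks work
  mentions: number;    // how many respondents reported it
}

const issues: DiaryIssue[] = [
  { description: 'Ticket search ignores typos', severity: 2, mentions: 7 },
  { description: 'Session expires mid-reply', severity: 3, mentions: 4 },
  { description: 'Status colors are hard to tell apart', severity: 1, mentions: 9 },
];

const ranked = [...issues].sort(
  (a, b) => b.severity * b.mentions - a.severity * a.mentions,
);

ranked.forEach((issue, i) =>
  console.log(`${i + 1}. [score ${issue.severity * issue.mentions}] ${issue.description}`),
);
```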

How to deliver results

I’d recommend putting the top 10 issues into your deck and adding a few quotes from the actual diaries or interviews (without naming the author, of course), if appropriate.

When it fails

This method doesn’t require an advanced skill set, so it’s almost foolproof. The only potential pitfall is miscommunication with your respondents. To avoid any ambiguity, speak with them in plain language and verify that you’ve explained what you expect from them as clearly as you can. That way, you’ll avoid waiting a considerable time for information you won’t be able to use.

That’s all; I have nothing more to add on the topic. The last thing I want to say is that you probably won’t run a perfect experiment every time you need to test something, and not every experiment will provide you with a sparkling insight either. Anyway, don’t settle for guessing; keep looking for proof.

The truth is out there.
