Usability Testing in BI - you need to know your weaknesses and fix them

Ivett Kovács
Jan 19, 2022 · 14 min read



Have you ever seen your dashboard fail after its release, when it meets its biggest critics: the end users? You developed a report to the best of your knowledge, but only a few people used it? To avoid this catastrophe, make sure to validate and test the functionality, the ease of use, and the look and feel of your report with real users.

Everyone agrees that usability is important, but does everyone truly know what it means? In my experience, project managers and stakeholders rarely know what the term means, and when they use it they are usually referring to software or websites, not BI products. Or they consider Usability Testing and User Acceptance Testing (UAT) to be the same thing. As a result, UAT is performed before the release, but the usability test is simply omitted.

User Acceptance Testing ≠ Usability Testing

In this article, I focus on why usability testing is an essential part of driving value in BI products too.

UAT is more or less functionality testing.

The main purpose of UAT is to validate the end-to-end business flow. It assesses whether the report/dashboard can support day-to-day business and ensures the report is accurate for business usage. It does not focus on minor or cosmetic issues, like colors, brand identification elements, spelling errors, or software glitches.

UAT answers questions like:

  • Does your report/dashboard meet all the requirements that were set out at the beginning of the project?
  • How is the performance of the dashboard?
  • Does the data load and refresh quickly when you filter or take any action?
  • Do extracts refresh in an acceptable time?
  • Is the data correct and accurate?

Throughout my career as a BI dashboard developer, I have encountered many people who believe that their company or department conducts usability tests when, in fact, they only ran UAT, and then they did not understand why their users were not using those dashboards. If you're not checking that you've built the right report and that what you've built works for users, don't be surprised when no one uses the dashboards.

What do organizations lose by not incorporating a usability testing process?

Why is it usually skipped? Because it is time-consuming, and there is a risk that the report will require further development, meaning the deadline can no longer be met; and of course, anything that takes extra time also costs money. Cost, time, and project hours dominate projects.

  • But is it better to have made something that no one uses?
  • Why have thousands of dollars been paid for development when the report adds no value within the organization?
  • Why are there more than 100 reports on the server when only 2–3 are used?
  • Why do managers think that if the Data Quality test is done, the project is over?

What they don't realize is that a usability test can actually save them hours of rework later, when they would otherwise have to re-tool a dashboard that users "hate".

Usability testing is a research activity

in which you can reveal the reactions and behaviors of actual users as they interact with the report in real time. The goal is to understand how our users actually interpreted the views and charts we displayed to them, and how they actually interacted with all the filters and actions in a specific tool.

Usability is defined by five main characteristics; a small sketch of how to capture them per test session follows the list:

  1. Effectiveness: How accurately can users complete the required actions?
  2. Efficiency: How quickly does the report help users reach their goals and answer their business questions?
  3. Engagement: How pleasant is the report to use? Is the look and feel appropriate for the company?
  4. Error tolerance: How often do issues occur?
  5. Ease of learning: How intuitive and easy is the report/dashboard to use and learn? How easily can users remember how to use it?
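
Here is that sketch: a minimal Python record for one participant session, scored along the five characteristics. The field names and example values are my own illustrative assumptions, not a standard, so adapt them to your report and your rubric.

```python
from dataclasses import dataclass

@dataclass
class SessionRecord:
    """One participant's test session, scored along the five characteristics.
    All fields are illustrative; adapt them to your own rubric."""
    participant: str
    tasks_completed: int          # effectiveness: how many assigned tasks succeeded
    tasks_total: int
    avg_seconds_per_task: float   # efficiency: time to answer the business question
    satisfaction_1_to_5: int      # engagement: post-task satisfaction rating
    issues_encountered: int       # error tolerance: confusions, dead ends, wrong clicks
    needed_help: bool             # ease of learning: could they proceed unaided?

    @property
    def completion_rate(self) -> float:
        return self.tasks_completed / self.tasks_total

# Hypothetical example: one session from one participant
session = SessionRecord("analyst_01", tasks_completed=4, tasks_total=5,
                        avg_seconds_per_task=95.0, satisfaction_1_to_5=4,
                        issues_encountered=2, needed_help=False)
print(f"Completion rate: {session.completion_rate:.0%}")  # -> Completion rate: 80%
```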

Don't forget: you are the one who developed and designed the dashboard, so you are the one who knows the most about it, how to use it and how to interact with it, but that does not mean your users can or will.

Usability is not about a “coolness” or “wow” factor. It’s not about forcing your personal beliefs or design preferences onto your users. It’s about task completion. - Shari Thurow, author of the book Search Engine Visibility

User satisfaction is directly related to task completion. If users can find answers to their questions quickly and easily using the dashboard, they typically report high satisfaction, so engagement with the report will be higher. If task completion is difficult because the dashboard you built cannot support them effectively, users typically report low satisfaction and won't use it, because there is no engagement.

Before we go through the most crucial steps of running a usability test, let's clarify that usability "rules" aren't set in stone. What worked for one dashboard and its users may not work for another dashboard, because it has different users whose needs may differ. Most suggestions in this post should be emulated, not copied.

Step 1: You need to define what kind of test you want to run

  • Moderated vs. Unmoderated

Use moderated testing to investigate the reasoning behind user behavior, and unmoderated testing to test a very specific question or observe behavior patterns.

I recommend the moderated version because it encourages test users to keep thinking aloud and keep trying to answer our questions, where they might otherwise give up if they were alone with no one holding their hand. In moderated tests, you also have a chance to watch the session (live or recorded), replay it, and see where and why they got lost at a specific point.

  • Remote vs. In-Person

Remote usability tests are done over the internet; in-person testing requires the test to be completed in the physical presence of a moderator.

Compared to remote tests, in-person tests provide more benefits, since you can observe and analyze body language and facial expressions while participants learn the interface. You can actually see people hesitating over a call-to-action (CTA) button, moving their mouse around looking for the right place to click, and so on. However, most of us, dashboard developers and consultants, don't live in the same city, or even the same country or region, as our clients, so in-person testing is often simply not possible. You can ask users to turn on their camera while they share their screen, but it will never be the same as being in the same room with them.

On top of that, many of us currently have to work remotely and cannot see people in person. What we can do is book a number of meetings with the users, give them access, and follow up the day after to get their impressions, hoping they did not forget that they need to test the report. Remote testing doesn't go as deep into a participant's reasoning, but it allows you to test across different geographical areas using fewer resources.

“I miss the days when I could test in person since I used to love to watch users look at a dashboard for the first time. When they encounter the navigation, did their eyes roll along the top of the dashboard from left to right as expected? What did they immediately focus on next? They pause to figure out the interface, or are they lost? Then the few minutes of them encountering and finding their way around the data. Are they interacting with the data the same way that I had envisioned? I have found that this works best with people that I have good relationships with. Since it is a somewhat personal look into the way that certain people digest information, it’s actually very personal. There is a moment of vulnerability where people are lost and they want to understand but might not yet. They can also appear to be conflicted too, the hard part is if it’s the dashboard UI or the data that caused this. That has gone away now with remote offices and not being able to see people now a good amount of my actual learning by observation has almost vanished. Now that we are working remotely, I have resorted to a number of meetings with the user. Normally I will give them access and follow up the day after to get their impressions. Then I follow up about 2–3 days afterward to hear any additional input. Yet it just isn’t the same as seeing them navigate and their facial expressions while they learn the interface.” - Cesar Picco, Sr Engineer Software at T-Mobile

“When I did usability tests before (2012–2014) we used to take the user into a room and ask them to navigate the tools while we took notes. Users knew and saw us taking notes so that always influence their behaviors. I guess this can be better done now if we can screen record the user without them knowing. The less interference in the process better real feedback we will have.” - Rodrigo Calloni, Visual Analytics Expert at Inter-American Development Bank

  • Quantitative vs. Qualitative vs. Analytics

There are 3 research methods that work well for UX benchmarking: quantitative usability testing, analytics, and survey data.

  • Quantitative usability testing. Users perform predesigned tasks while you collect metrics (success rate/completion rate, satisfaction, number of errors/issues, and time on task) that measure their experience on those tasks. If you are interested in how to quantify user experience, I recommend the book Quantifying the User Experience: Practical Statistics for User Research.
  • Survey. Surveys are the qualitative backbone of UX research and user testing. While the quantitative test uncovers the biggest opportunities in the report's usability, the survey fills in the gaps about the users' backgrounds, opinions, and the behaviors behind their interactions. This method provides satisfaction rates and net promoter metrics.
  • Analytics. System-usage data (you can analyze abandonment rates and feature adoption). I would recommend this phase after the quantitative usability testing and/or the survey research, to check the engagement of your report. A minimal sketch of computing the quantitative and survey metrics follows this list.
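
Here is the sketch promised above: a minimal example of aggregating the quantitative and survey metrics from a handful of session results. The session data and field names are invented for illustration; the Net Promoter Score rule (promoters answer 9–10, detractors 0–6, on a 0–10 scale) is the standard definition.

```python
from statistics import mean

# Hypothetical results from one round of quantitative testing plus a short survey
sessions = [
    {"completed": True,  "time_on_task_s": 80,  "errors": 1, "nps_answer": 9},
    {"completed": True,  "time_on_task_s": 130, "errors": 0, "nps_answer": 7},
    {"completed": False, "time_on_task_s": 240, "errors": 4, "nps_answer": 3},
    {"completed": True,  "time_on_task_s": 95,  "errors": 2, "nps_answer": 10},
    {"completed": True,  "time_on_task_s": 110, "errors": 1, "nps_answer": 8},
]

completion_rate = mean(s["completed"] for s in sessions)   # success rate across tasks
avg_time_on_task = mean(s["time_on_task_s"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

# Net Promoter Score: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale
promoters = sum(s["nps_answer"] >= 9 for s in sessions) / len(sessions)
detractors = sum(s["nps_answer"] <= 6 for s in sessions) / len(sessions)
nps = round((promoters - detractors) * 100)

print(f"Completion rate: {completion_rate:.0%}")       # 80%
print(f"Avg. time on task: {avg_time_on_task:.0f} s")  # 131 s
print(f"Avg. errors per session: {avg_errors:.1f}")    # 1.6
print(f"NPS: {nps:+d}")                                # +20
```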

Step 2: Recruiting test users

I am sure that when it comes to usability testing, your first concern will be: how many people should join the test?

Steve Krug, the author of Don't Make Me Think, believes that three users are enough:

“The first three users are very likely to encounter nearly all of the most significant problems, and it’s much more important to do more rounds of testing than to wring everything you can out of each round.”- Steve Krug

We cannot disagree with his statement, but the reality is that users drop out of usability testing all the time. Hence the rule of thumb is that you need no more than five users to generate valuable insight. Nielsen shows that 85% of all usability problems can be identified with a sample of five people; usually, after three or four users, the same usability issues recur and clear patterns emerge. (The quick calculation after the quote below shows where that 85% figure comes from.)

“Testing with five people lets you find almost as many usability problems as you’d find using many more test participants.”
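
The 85% figure is not magic; it falls out of Nielsen and Landauer's simple problem-discovery model. If each test user uncovers any given usability problem with probability p (their reported average was roughly 31%), then n users uncover it with probability 1 - (1 - p)^n. A quick check of the arithmetic:

```python
# Nielsen/Landauer problem-discovery model: expected share of usability problems
# found by n test users, assuming each user uncovers a given problem with
# probability p (their reported average was about 0.31).
p = 0.31
for n in range(1, 9):
    found = 1 - (1 - p) ** n
    print(f"{n} users: {found:.0%} of problems found")
```

With five users this lands around 84–85%, and the curve flattens quickly afterwards, which is also why Krug's advice to run more, smaller rounds holds up.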

How can you find the right users?

It depends. If you work on internal reports at your own company, you have an easier time: you have access to potential users. If you work as an external consultant for a big company overseas, you might not have access to them. My experience is that most of the time project or account managers simply leave the usability test out of the scope at the beginning of the project, and by the time you reach that step the timeline won't accommodate it. In this case, only the project manager(s) at that company will be able to help you, because they are the connection between you and the users. They are responsible for giving you contacts and helping you organize the test sessions within the company. But if this phase is out of scope, it is also out of the project's budget.

👉 All quotations should clearly state the resource and time requirements for the usability test.

Testing data dashboards in a business environment is difficult for many users. It involves interactions with the interface such as selecting filter or parameter values, clicking a scatter plot to trigger other actions on the view, drilling down from chart to chart, or navigating to another dashboard. Users often need a certain level of knowledge to use the dashboard and the specific BI tool, to know what the filters mean, and to read a chart. If they are presented with visual information they don't know what to do with, they may not be able to help you test the report effectively. For these reasons, it matters who your testers will be.

Inviting Participants for Usability Testing

Send them a recruitment email. The initial email should give users enough information to understand what will happen.

It should cover these parts:

  • Email title: Use a clear and direct subject line. Include the name of the business and the scope of the activity, e.g. “XY Feedback Discussion”.
  • Introduce yourself: Say your full name, your role, and where you work, so that participants can identify you and the business you are working for.
  • Explain the context: Clearly state the goals and scope in 2–3 sentences. What service or application is being built and why?
  • Set expectations: Describe when the discussion will take place, where, and how.
  • Explain logistics: Internet access, laptop, microphone, etc.

Don't use complicated terms like "Usability Study" or "User Research". Your average participant probably won't understand them and might decline the invite because of that. To avoid user anxiety, use the term "activity," "event," or "session" instead of "test."

Here is a recruitment email template:

Invite Template

Hi,

My name is ________________, from the XYZ team. We are working on the xy dashboard to help you make actionable decisions by using data visualization. We'd love to get your feedback so we can tailor our designs to your needs.

The review session will record your voice and screen while you answer questions and complete short tasks interacting with the dashboard. Please speak your thoughts aloud. We want to hear your honest feedback, as it will help us understand the efficiency of the report.

The event will take 20–30 minutes. Please take part in the activity in a quiet place, such as an office or your home. The recording will be kept confidential and used for quality analysis purposes only.

We would love to get some time from you. Please let us know which day works for you and we could schedule the session.

Thank you,

Yours sincerely,

Step 3: Designing the task(s)

Pre-testing Questions

Here are some background questions to ask your participants:

Uncover user demographics.

  • What is your current occupation and exact role in the company?

Understand the existing approach to solving the problem your product aims to solve.

  • What type of product do you use to do X action?
  • Please describe your experience with xy BI tool.
  • How often do you use it?
  • On a scale of 1 to 5 (1=not at all confident, 5=very confident), how would you rate your level of confidence in using xy BI tool?

Explore the user’s prior knowledge with respect to your product.

  • How much do you know about the problem your report/dashboard is solving? (I don't know anything about it / I know a little, but I could learn more / I am an expert)

Asking the right demographic and background questions before the test allows you to choose the right test users and gives the usability testing results more context.

Testing Questions

In this step, the goal is to allow the user to interact with your report, while uncovering pain points within the experience.

Discover why users engage or disengage with the report.

  • How did you find the experience of the view (specific view or feature)?
  • If you were looking for xy information, where would you expect to find it?

Identify how you can increase the product's ease of use.

  • How was the experience of using the product to complete this task?
  • How easy or difficult was it to navigate?
  • What are your thoughts on the design and layout?

Explore what improvements should be made in order to align your product with user expectations.

  • What motivated you to click a specific interaction?
  • Do you have any suggestions on how this action could be made easier and more understandable for you?

Post-Testing Questions

The post-testing part summarises the end-to-end user experience and overall impressions of the report. It evaluates what needs to be changed or further developed within the report, such as a feature that feels complicated in the current version.

Example questions:

  • How would you describe your overall experience with the report?
  • What did you like the most about using this report?
  • What did you like the least?
  • What, if anything, surprised you about the experience?
  • What, if anything, caused you frustration?

Here is an example form in Notion: the Report Usability Test Template. It contains an overview, user profiles, and step-by-step test pages. You can duplicate the file, then easily customize it and export the tables. If relevant, include recommendations for things that need to be changed.

https://cdn-images-1.medium.com/max/2400/1*Z9hNVu6oV7vHBuiw4p4-3Q.png

Step 4: Test Session Day

Let’s assume you are running the session yourself: here is what you do.

  • Make sure your participant understands what's going to happen during the test session.
  • If you're doing it remotely, make sure they can hear you properly and there are no technical issues.
  • If you are recording the test, ask for their permission.
  • Always start with a friendly conversation. During it, you can collect answers to the Pre-Test Questions.
  • At the end of the session reserve some time to ask follow-up questions and collect the user’s final feedback. Be sure to thank them for their help.

Step 5: Analysing user feedback and insights

A successful testing session doesn't get you fully over the finish line. At the end of the process, you should create a final report to share your insights and findings with the rest of your team. Having collected information and feedback about the users' patterns of use, you can start to identify what works and what doesn't. Categorize and rank the list of problems/issues based on their impact, so you know which ones need to be addressed first; for example, a usability issue that makes it hard for users to find the call-to-action (CTA) on the dashboard is more critical than a misspelled title. (A small sketch of this triage follows the severity scale below.)

  • Critical (usability catastrophe): Impossible for users to complete the task. Fix this before the report can be released.
  • Major: Frustrating for many users.
  • Minor: Annoying; the participant stops to think but proceeds.
  • Low: Cosmetic issues that still need to be fixed, like a misspelled word in a chart title.
  • No Issue
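
As mentioned above, you can make this triage repeatable by tagging each finding with a severity level and sorting the backlog before you share it. A minimal sketch; the findings, counts, and field names are invented for illustration:

```python
# Order in which severities should be addressed (most urgent first)
SEVERITY_ORDER = {"Critical": 0, "Major": 1, "Minor": 2, "Low": 3, "No Issue": 4}

# Hypothetical findings from one test round with five participants
findings = [
    {"issue": "Misspelled word in chart title",           "severity": "Low",      "users_affected": 1},
    {"issue": "CTA not discoverable on the dashboard",    "severity": "Critical", "users_affected": 4},
    {"issue": "Filter labels unclear, users pause",       "severity": "Minor",    "users_affected": 3},
    {"issue": "Drill-down action fails on mobile layout", "severity": "Major",    "users_affected": 2},
]

# Sort by severity first, then by how many test users hit the problem
backlog = sorted(findings, key=lambda f: (SEVERITY_ORDER[f["severity"]], -f["users_affected"]))

for rank, f in enumerate(backlog, start=1):
    print(f"{rank}. [{f['severity']}] {f['issue']} (seen by {f['users_affected']} of 5 users)")
```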

Do this as soon as possible after testing, while the observations are still fresh in your mind, then make the changes that improve the user experience of the dashboard(s).

If you use Notion's tables (and of course you can use any other tool), you can easily build this simple but effective Tableau report. It helps you back up the findings and makes it easy for you and the team to understand the problems. You can also highlight user comments or recommendations to provide relevant context around an issue.

👉 Share the report with other stakeholders so they can see the value of your method and adopt a more user-centered mindset.

https://tabsoft.co/3qILKz6

In an ideal world, usability testing is an iterative process that happens several times throughout the development of a report.

I think the main point is to not deliver everything on a very first version and quickly start small sprints. Sprints are great for including some kind of user testing, user validation, because it concerns only a few items/features to review. - Ludovic Tavernier, Data Visualization Specialist

We are also aware that the more usability tests you run, the higher your project budget will be. Still, I really advise you to budget for at least one round; I promise it will pay off.

Summary

If we want our designs to fit their purpose, we need to critique our work honestly. Usability testing allows you to view your work from a fresh, unbiased perspective that may contrast with the early development scope you've already outlined. It lets you learn something that can help you improve your reports before the release. And don't worry: no one has a perfect usability testing process, and we all make mistakes. This process is still new in the world of BI products, but we need to learn it and adapt it to produce a better user experience.

I’d love to hear your experience and your suggestions. Let me know in the comment section.

Resources:

Notion template’s idea comes from Ines Duvergé

https://usabilitygeek.com/usability-metrics-a-guide-to-quantify-system-usability/

https://researchloop.net/2020/12/09/how-i-write-an-invitation-email-for-usability-interview-participants/

Pásztor Dávid: UX design book


Ivett Kovács

Data Visualization Designer | Tableau Zen Master & Public Ambassador | Grayscale Studio