User Testing in the Time of Pandemic*

Stefan Ivanov · Published in Ignite UI · Aug 26, 2020 · 6 min read

As a UX Manager, the quality of research is just as important to me as the quality of design. Very often, the overall quality of our team’s output is tied to how well we collect feedback. In recent years, we have been constantly optimizing our processes given the constraints we work in, and in this two-part article, I want to start by sharing with you my practices for getting the most value out of user testing. In this first part, I will stress the importance of creating a user test plan containing observation activities that lead us to one or more well-formulated hypotheses, for which we then collect quantitative and qualitative feedback.

Have a Plan 🗺

So, let’s start with the creation of a plan. I know it sounds obvious and you have probably heard it a thousand times, but skipping it is one of the most frequent mistakes I see junior people make. From my early days as a university student, this mindset was instilled in me, and I am very strict about it, yet the majority of people I meet don’t even write down plans and goals for their efforts. As one of my favorite authors once said:

Plan your work and work your plan

Napoleon Hill

When it comes to user testing in times of social distancing or social attraction (or whatever the next normal ends up being called), UX designers and researchers must remember to write down their observations, suspicions, and goals as a starting point for creating a plan. This is formally known as formulating a hypothesis for your research. An example of a null hypothesis, which I prefer when creating a user testing plan, would be that the number of users who click the “DOWNLOAD TRIAL” button on my website does not depend on its color and location on the page. From this hypothesis, we determine the variables we control, such as the style of the button, and the ones we plan to measure, i.e. the percentage of users who click it. Last but not least, the hypothesis guides the design of the prototype or mock-up that we create to confirm or refute it. If my hypothesis rings a bell, you recognized correctly that my example is often the trigger to run some A/B testing, which is not to be confused with user testing even though it also involves users. Speaking of which, that leads us to our next section.
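To make the statistics behind such a null hypothesis concrete, here is a minimal sketch in Python of how it could be tested once the click counts are in. The visitor and click numbers are made up for illustration, and the chi-squared test is just one common choice for this kind of count data, not necessarily what your analytics tool uses under the hood.

```python
# A minimal sketch of testing the null hypothesis above, assuming we ran an
# A/B test and counted clicks on the "DOWNLOAD TRIAL" button per variant.
# The visitor counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: variant A (original style) and variant B (new color/location).
# Columns: visitors who clicked the button vs. those who did not.
observed = [
    [120, 2880],  # variant A: 120 clicks out of 3000 visitors
    [165, 2835],  # variant B: 165 clicks out of 3000 visitors
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")

# At a 0.05 significance level, a p-value below it lets us reject the
# null hypothesis that clicks are independent of the button's styling.
if p_value < 0.05:
    print("Reject the null hypothesis: styling appears to matter.")
else:
    print("Cannot reject the null hypothesis.")
```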

With or Without Users 👨‍🚀

Now, this may come as a surprise to some of you thinking: but how can I have user testing without users? You are right, you can’t, but to create a good hypothesis, you often need to either observe users and their behavior or run another validation experiment such as a heuristic evaluation. In my work, I have been using both, and for observing user behavior online, I find tools like Hotjar particularly useful. A few years ago, it led us to the conclusion that our customers were frequently looping between demos, documentation, and marketing pages, which motivated us to design a more integrated experience between the three.

[Image: A product page on the Infragistics website]
Our current website creates a much more integrated experience between marketing information, product pages, demos, and detailed developer documentation.

Before I jump over to the heuristic evaluation, I want to make one more point. Thus far, I have mentioned A/B testing and tools like Hotjar for observing user behavior through heatmaps; these usually involve feedback from the largest range of users. Our core topic today is user testing, which is usually limited to a much smaller number of users, and last comes the heuristic evaluation, which involves only a very small number of experts.

I run heuristic evaluations as often as possible and usually do them for the marketing pages of a particular product. The last time I did one, it uncovered several stylistic decisions that aimed for aesthetics but at the end of the day frustrated the people who took part in the experiment. Running a heuristic evaluation is relatively simple: you need a set of heuristics, and I usually use Jakob Nielsen’s 10 Usability Heuristics for User Interface Design and involve UX students. After we identify the problems and qualify them with respect to the rule they break, we collectively decide on the severity and compile a list that may contain either fruitful soil for planting some hypotheses or, when the problem is a more trivial one, specific guidance on solving it. This method tends to result in lots of qualitative feedback based on personal observations, and when it comes to user testing, the next thing to remember is to always combine qualitative and quantitative methods for collecting feedback, which we will discuss in the next section.
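For illustration, here is a minimal sketch of how such a list could be compiled once every evaluator has rated each finding, assuming Nielsen’s usual 0–4 severity scale. The issues, heuristics matched, and ratings are all invented examples, not findings from our actual evaluations.

```python
# A minimal sketch of compiling heuristic evaluation findings, assuming each
# evaluator rates every issue on Nielsen's 0-4 severity scale
# (0 = not a problem, 4 = usability catastrophe). Data is illustrative.
from statistics import mean

# (issue description, heuristic violated, one severity rating per evaluator)
findings = [
    ("Low-contrast label on pricing tier", "Aesthetic and minimalist design", [3, 2, 3]),
    ("No feedback after form submission", "Visibility of system status", [4, 4, 3]),
    ("Jargon in the download dialog", "Match between system and the real world", [2, 1, 2]),
]

# Average the evaluators' ratings and list the most severe issues first.
ranked = sorted(
    ((mean(ratings), issue, heuristic) for issue, heuristic, ratings in findings),
    reverse=True,
)

for severity, issue, heuristic in ranked:
    print(f"{severity:.1f}  {issue}  (violates: {heuristic})")
```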

Quality vs. Quantity 👨‍👩‍👧‍👦

When it comes to user testing, I always combine a quantitative technique, like a remote user test through tools and platforms such as Indigo.Design, with a qualitative method, like a short survey or feedback form that allows participants to share their thoughts. I have tried sharing links directly from Sketch or Adobe XD, but this approach allowed me to collect only qualitative feedback and usually unveiled only the major problems in my design. I have also done user testing the traditional way, observing users as they complete a list of tasks, but the need for moderation and observation introduced a big overhead for me, even when I did it online through screen sharing and similar approaches. The unmoderated remote user testing that the Indigo.Design platform provides turned out to be not only a suitable strategy during a pandemic but also kept my day open and gave my users much more flexible terms for participation. It not only quantifies the test for me but also lets me record and review a screen capture from my user’s device along with audio, through which I can take advantage of the think-aloud technique. Furthermore, I love the prototyping experience I get with the UI Kits for Sketch, Figma, and Adobe XD, which streamlines my prototyping and cuts my iteration time in half. If you are curious to know more about the whole process, there are a ton of useful video tutorials.

[Image: A study showing the tasks on the left and details about each one, with the users that succeeded or failed at them.]
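A view like this boils down to pass/fail counts per task, and with the small samples typical of user testing, a plain percentage can overstate your certainty. Here is a minimal sketch, with invented counts, of one common way to report such results: the success rate together with a Wilson 95% confidence interval. This is a general technique for small-sample proportions, not necessarily how any particular platform computes its numbers.

```python
# A minimal sketch of summarizing a task's pass/fail counts like the ones in
# the study view above. The counts are invented for illustration.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a success proportion (z=1.96 -> 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return center - margin, center + margin

succeeded, participants = 7, 9  # e.g. 7 of 9 users completed the task
low, high = wilson_interval(succeeded, participants)
print(f"success rate: {succeeded / participants:.0%}, 95% CI: {low:.0%}-{high:.0%}")
```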

Before I jump to the qualitative side, I want to make one final remark about the quantitative approach: always take learning effects into consideration. To put it differently, eliminate learning by randomizing the order of your tasks. Luckily, the Indigo.Design platform allows that with the flip of a switch when setting up your usability test. The final step of that configuration is a place where, besides thanking your users for their participation, you can also share with them a link to a feedback form asking questions about different aspects of their experience. You can use a more formal approach with forms like the NASA Task Load Index, or a less strict one like this JotForm template, which you can customize to your needs. The specifics are irrelevant as long as you remember to take advantage of quantitative and qualitative methods in parallel.
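The platform handles the randomization for you, but to make the idea behind that switch concrete, here is a minimal sketch of per-participant task order shuffling. The task names are placeholders, and this shows only the general technique, not how Indigo.Design implements it.

```python
# A minimal sketch of eliminating learning effects by giving each participant
# a different, randomized task order. Task names are placeholders.
import random

tasks = ["Find the pricing page", "Start a trial download", "Locate the docs"]

def task_order_for(participant_id: int) -> list[str]:
    # Seed per participant so the order is reproducible when reviewing sessions.
    rng = random.Random(participant_id)
    order = tasks.copy()
    rng.shuffle(order)
    return order

for pid in range(1, 4):
    print(pid, task_order_for(pid))
```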

Continue to part 2…

*I have to give credit for the title to the Nobel Prize-winning author Gabriel García Márquez and his novel Love in the Time of Cholera. My love for prototyping and usability testing contributes to this association with an author I admire.


Stefan Ivanov · Ignite UI

I have been doing UX design for more than 10 years and have helped companies establish, grow, and optimize their design processes and culture.