Quick User Research Methods: lightweight techniques to uncover insights

Harish Vaidyanathan
Published in Accela Design
4 min read · Jan 5, 2016

User research is a fundamental component that informs what we create as the Product & User Experience teams. Through user research, we uncover insights about user behavior, attitudes, technology proficiency and much more, all of which factor into the design of a useful, functional and usable product. However, user needs and behavior evolve over time, and to make great products we need to keep up with that evolution.

There are plenty of established research techniques to get at this kind of information. Broadly, they can be broken down into:

  1. qualitative techniques — research that helps us answer the hows, whys, whens, wheres and whats of user behavior
  2. quantitative techniques — research that validates a hypothesis or quantifies observations

There is value in doing both, and the research objective heavily dictates which techniques should be employed to get the best possible results. In this blog post, I want to highlight lightweight research methods for lean and agile user research. By incorporating at least a few of these techniques, one can institutionalize a continuous stream of feedback with minimal logistical effort. I will touch upon some qualitative and quantitative methods that can form an important part of closing the feedback loop with your end-users.

Qualitative techniques

Desirability studies — In this study, the user is exposed to multiple variants of a user interface that solve the same problem, albeit with different treatments. Users can tell us which variant they like the most and why. This is crucial in early concept validation, where one is looking for buy-in from the end-user before taking a deeper dive into designing a full-fledged solution.

1. A screen grab of a desirability test on Usabilityhub

Unmoderated remote user testing — Although nothing can replace the value of an on-site test, it involves the repeated legwork of test setup, the logistics of having moderators and observers, and the eventual test wind-down. With unmoderated tests, ideally, one can set up a test once, foolproof the setup and run it over and over with minimal intervention from the members of the research team. In addition, sessions can be recorded for socialization and archived for transparency.

Heuristic review — Heuristics are rules or guidelines that serve as a checklist against which a product can be proofed. There are a number of heuristic sets available; famous ones include Jakob Nielsen’s and Ben Shneiderman’s design heuristics. Additional value can be gained by having members outside the core product team serve as the experts conducting a heuristic review. The rationale here is that the core team is so used to its designs that it may not see things in the same light as someone with a fresh pair of eyes.

2. Nielsen’s design heuristics

Parallel design/Design Studio — In this approach, team members are given the leeway to design different solutions to the same problem at hand. The advantage of this technique is that one can marry the best of multiple approaches and integrate them into a final solution. Although technically not a pure research method, more minds working on the problem helps develop a more rounded solution. This can also be your avenue to internally identify the pros and cons of different approaches to solving a problem.

Quantitative techniques

Card sorting (Tree testing) — Card sorting is an exercise that can be extremely helpful in developing the organization scheme of a software/web application. In a card sort, users are given concepts or items, which they have to organize into logical clusters. This is beneficial when addressing the information architecture of a new product or one undergoing a major overhaul. The reverse exercise, tree testing, validates an existing organization scheme: users navigate through the scheme to find a specific item.

3. An online card sorting exercise on OptimalSort
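As a sketch of how card-sort results can be analyzed, the snippet below computes how often each pair of cards was grouped together across participants; pairs grouped together by a majority suggest a category. The card names and groupings are made up for illustration, not from any real study:

```python
from itertools import combinations

# Hypothetical card-sort results: each participant's groupings of six cards.
sorts = [
    [{"Pay Bill", "View Invoice"}, {"Profile", "Password"}, {"Help", "Contact Us"}],
    [{"Pay Bill", "View Invoice", "Help"}, {"Profile", "Password"}, {"Contact Us"}],
    [{"Pay Bill", "View Invoice"}, {"Profile", "Password", "Contact Us"}, {"Help"}],
]

# All distinct cards seen across every participant's sort.
cards = sorted(set().union(*[group for s in sorts for group in s]))

def similarity(a, b):
    """Fraction of participants who placed cards a and b in the same group."""
    together = sum(any({a, b} <= group for group in s) for s in sorts)
    return together / len(sorts)

# Pairs grouped together by a majority of participants suggest a category.
clusters = [(a, b, similarity(a, b)) for a, b in combinations(cards, 2)
            if similarity(a, b) > 0.5]
for a, b, s in clusters:
    print(f"{a} + {b}: {s:.0%}")
```

Dedicated tools such as OptimalSort produce richer outputs (dendrograms, similarity matrices), but the underlying co-occurrence idea is the same.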

Analytics (A/B Testing) — Analytics on an application can be a gold mine. Combine that with user profile information and one can hit a treasure trove. Analytics helps one get at information such as the devices people use, the browsers they prefer, the click paths users take through a product, event tracking on web pages and so much more. You can even serve different variants of a software solution (A/B testing) across the user populace and see which one performs better.
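To make the A/B comparison concrete, here is a minimal sketch of a two-proportion z-test that checks whether one variant's conversion rate beats the other's beyond what chance would explain. The conversion counts are illustrative, not from a real product:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: variant B converts 120/1000 vs. A's 90/1000.
z, p = two_proportion_z(90, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these numbers the difference clears the conventional 5% significance bar; in practice, most analytics platforms run an equivalent test for you.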

Intercept surveys — By now, most of us have seen sites running 3–5 minute surveys. The caveat here is that there is a fine line between annoying the user and doing enough to entice the user to share valuable information with us. Offering meaningful incentives for filling out a quick survey will definitely encourage participation.
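One way to keep an intercept survey lightweight is to stop intercepting once you have enough responses. The standard sample-size formula below is a sketch that assumes simple random sampling and worst-case variance (p = 0.5):

```python
import math

def sample_size(margin=0.05, confidence_z=1.96, p=0.5):
    """Respondents needed for a given margin of error at a given confidence.

    confidence_z=1.96 corresponds to 95% confidence; p=0.5 is the
    worst-case (maximum-variance) assumption about the true proportion.
    """
    return math.ceil((confidence_z ** 2) * p * (1 - p) / margin ** 2)

print(sample_size(0.05))  # ±5% margin at 95% confidence
print(sample_size(0.10))  # ±10% margin at 95% confidence
```

Loosening the margin of error from ±5% to ±10% cuts the required responses roughly fourfold, which is often a worthwhile trade for a quick directional read.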

First click testing —

“A participant who clicks down the correct path on the first click will complete a desired task 87% of the time on a website. However, one who clicks down the wrong path on the first click tends to be successful only 46% of the time,”

says Jeff Sauro of Measuring Usability. Results from such tests can help identify where users initially click on the interface when looking for what they want. If significant, the results of the study may necessitate a reorganization of the information architecture or hierarchy of the application.
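As an illustration, a first-click log reduces to a simple accuracy figure plus a breakdown of where clicks actually landed. The element names and click data here are hypothetical:

```python
from collections import Counter

# Hypothetical first-click log for one task: where each participant clicked first.
first_clicks = ["Billing", "Billing", "Account", "Billing", "Search",
                "Account", "Billing", "Billing", "Account", "Billing"]
correct_target = "Billing"

counts = Counter(first_clicks)
total = len(first_clicks)
for target, n in counts.most_common():
    print(f"{target}: {n / total:.0%}")

# A low first-click accuracy suggests the IA or labeling needs rework.
hit_rate = counts[correct_target] / total
print(f"First-click accuracy: {hit_rate:.0%}")
```

Given Sauro's figures above, an accuracy well below, say, 80% on a key task is a strong signal to revisit the hierarchy before polishing anything downstream.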

Used in tandem with market research and product research, these techniques will stand product teams in good stead, helping them know their users better, and know it faster.
