Whether a company is building software or providing services, it has to make a lot of decisions. Some of these are big decisions, such as ‘what should we make?’, ‘how should it work?’, or ‘how do we persuade someone to buy this?’. Other decisions are smaller, like ‘where should we put this button?’ or ‘what should this label say?’
User research de-risks decision making across the company by improving the quality of information available to inform those decisions. It does this by running studies to learn about the people who will be using the software, and by sharing what is learned with decision makers so that they can anticipate the impact of their decisions more accurately.
This means that the role of a user researcher can be split into two parts. The first is to plan and run appropriate studies that find reliable answers to the questions decision makers have. The second is to ensure that what those studies uncover is communicated to decision makers accurately, and in a way that encourages them to act on it.
If a user research team isn’t trusted, the second part of its role fails completely. The quality of a study is irrelevant if decision makers don’t believe, or don’t understand, what was learned from it.
This is particularly important for the user research team at Reach. As a new team in an organisation that hasn’t had user research before, we have to build trust in the work of user researchers from scratch. In this post, I want to share some of the things I believe are important to achieving that.
Be open about the limitations of studies
Every study has limitations, based on the method used and the effect that method has on the data captured. For example, a quantitative study will struggle to explain the behaviour or opinions it identifies in the depth a team might need to take action. A qualitative study will often not be able to describe how representative the issues or behaviours it uncovers are. Additionally, most study designs introduce a degree of artificiality that influences the ‘truth’ of the behaviours and motivations captured, such as lab studies capturing behaviour in an unrealistic context, or focus groups requiring people to self-report opinions in a group setting.
Limitations can also be hidden by how data is presented: the absence of confidence intervals can imply there is a difference between two results when a statistical test would reveal there isn’t. It’s important to be open and honest about the limitations of a study and of what was learned, and it is much better to expose those limitations yourself than to be called out on them. Being seen to hide or influence conclusions will severely damage people’s trust in the work of the team, and reduce the impact studies have.
Understand the role of research
People have skills relevant to their role. That includes designers, who have expertise in visual, interaction, content or service design; product managers, who have expertise in balancing often conflicting objectives and constraints; and developers, who have expertise in crafting software and understanding the implications of technological choices. All of this expertise usually makes them much better suited to making decisions in their domain than a researcher, whose expertise is in uncovering and understanding users’ context, goals and problems, and whether users can use the things being built.
Because decisions usually require a combination of all of this expertise, there should be a separation between ‘learning information about users’ and ‘deciding what to do as a result’. Recommendations created solely by researchers can lack the depth of understanding of the other domains needed to be useful. For this reason, promoting collaborative workshops featuring all of the disciplines as a way of uncovering and interrogating potential decisions is more useful than simply providing recommendations, and helps ensure that others’ expertise is recognised and deferred to. Recognising the expertise of others will, in turn, help build trust in a research team’s own core skillset.
Just give the facts
Half of the role of being a researcher is to communicate findings to teams. As mentioned above, it’s important to separate the findings describing what happened from opinions about what the company should do as a result.
One way of achieving this is to have a robust structure for how findings from research are communicated, one that prevents subjective opinions from creeping in. For example, when describing the findings from a usability test, the team will describe:
- What feature didn’t work as intended for the user
- How the experience that occurred differed from the intended experience
- What it was about the software that caused the feature to not work as intended
- What impact the issue had on the user’s ability to use the software
A well run study will uncover facts that answer all of these points. Those answers will be supported by data, and be objectively true. This is in contrast to the decision about ‘what should we do as a result of learning this’, which has many potential answers, and opinions about which is the best action are often subjective. Although a researcher may have ideas about what action they’d recommend, maintaining a clear distinction between ‘what did we learn’ and ‘what should we do’ reinforces the truth of the findings captured by a research team, and prevents them from being dismissed as the researcher’s opinions.
Make the findings repeatable
To avoid subjectivity, it’s important that the results from studies are repeatable: another researcher should come to the same conclusions. Raw data from studies is often messy, but a rigorous affinity mapping process can distil it into clear conclusions that would, hopefully, be replicated by another researcher. Other techniques, like a clear prioritisation process based on (mostly) objective criteria, will help ensure that the findings are valid and not overly influenced by a researcher’s personal biases, or their relationship with the thing being tested.
Share findings openly
All of the work the research team does is documented in a way any colleague should be able to understand, and is shared openly across the company on an internal team website and with physical artefacts stuck on walls, to increase people’s awareness that this research exists. Being open about the work not only helps educate and inspire other teams to ask about user research, but also opens the work up to critique and demonstrates faith in its quality, building trust in the work of the research team.
Do good work
Most importantly of all, do good work. Techniques such as internal reviews, and following a semi-structured process for running studies, can help reduce errors and ensure that any information given can be evidenced and comes from a robust, repeatable study. The challenge is balancing this quality control with giving individual researchers enough autonomy to take the steps they believe are needed to find reliable answers. This is why the research process should not be prescriptive, and should give researchers the freedom to act as they judge necessary. By building up a track record of doing high quality work in a timely fashion, a new user research team will steadily increase trust in its work.
To conclude: everyone’s goal when building software is to make something successful, quality is an essential element of success, and understanding users makes quality easier to achieve and success less dependent on luck. To have the kind of impact required to significantly influence the development of software, a research team needs to be trusted. Ethical and robust research practice is a key part of earning that trust, and particularly important for a new research team to demonstrate.