OPINION
Why the idea that ‘anyone can do user research’ is a bad one.
There is a perception that user research (UR) is something anyone can do: that you don’t need any kind of ‘formal’ research grounding or training. All that’s involved is drumming up a few participants and writing some kind of research session script — nothing but a few questions, really. But you can always get an AI to do that bit (you can even have an AI suggest research methods and write a research plan). And once you’ve got some data, you just group things into tidy categories and report the findings: ‘this is what they said, this is what they did’. Now you can get an AI to do that bit too, using one of the much-touted UR platforms. So what’s wrong with that?
Where do I start!? Crap in, crap out, to quote a fellow UR colleague. Why? Because if user research is something anyone can do, the implication is that it doesn’t require any specialist skills or knowledge.
That being the case, it can’t be research. Research is a science that requires a great deal of specialised skill (implying something learned through training) and a lot of knowledge gained through experience (implying development through practice). So, if you’re one of those who thinks that anyone can do user research, then please don’t call it research — because it isn’t.
To start with, let’s talk about who this is ‘bad’ for, and what is meant by ‘bad’. It’s bad for all of the decision-makers who make choices and take action based on user research evidence.
How is it bad? That’s simple. The risk of making poor decisions is amplified by a questionable research process and questionable evidence when the research is done by someone who isn’t trained or professionally experienced in research. That is, someone who knows little, if anything, about research paradigms, the nature of knowledge, methods or strategies and so on — never mind research ethics. They are untrained in how to identify and mitigate bias, or how to reflect on their own role as an active participant in the research (reflexivity).
Am I saying that only those with a PhD or a Master’s can do user research? No. I’m saying that people who do good user research are no different from those who do medical research, psychology research or data science — they are researchers who are trained, disciplined and experienced.
The justification for ‘anyone can do user research’ centres on two misconceptions: (1) that UR’s value in ‘return on investment’ terms is hard to quantify; (2) that UR is a drag on a UX team in an agile/lean context, where rapid iteration and continuous user feedback are essential.
So where did this come from — what are the origins of the ‘anyone can do user research’ fallacy? First, it stems from the widespread recognition of the importance of user-centred design (UCD) which, arguably, reached fever pitch during Covid, when the whole world was forced to go online. Because the UCD approach relies on user research, demand soared, with agile and lean methods forcing the pace. There weren’t enough skilled and experienced URs to go around.
The solution was to democratise UR, with the aim of empowering cross-functional teams to get involved in user research with experienced user researchers as enablers and coaches. This would deliver the joint wins of supporting the evolution of a user-centred ethos across the organisation and satisfying the demand for speedy, iterative user feedback. Unfortunately, over time, the original intentions of the democratisation advocates have been severely diluted to ‘anyone can do it’, often without the presence of an experienced UR at all.
What those two things — justification and origins — tell you is that user research as a service is not necessarily in dispute; what is in the crosshairs is the trained and experienced user researcher. ’Cos anyone can do it. In this scenario, user research is at best done tactically rather than strategically, distancing UR even further from any sense of real currency, in return-on-investment terms, at executive level.
And that is where the proverbial shotgun threatens to shoot the foot. Take the experienced and skilled UR out of the UX equation and replace them with ‘anyone’, and the whole UX/UCD enterprise starts to unravel.
Then we add in AI, now increasingly promoted as the ‘power’ behind the popular user research service platforms. They market their products as enabling the seasoned user researcher to do more, faster. Perhaps I’m a cynic, but do you really believe these tools will be confined to ‘streamlining and augmenting’ the UR’s work? Heck, no. We’re already seeing those platforms marketed as the magic wand that enables anyone to do research. (Before I get labelled as someone who ‘can’t move with the times’: yes, I have used AI often in my user research — just not for qualitative data analysis.)
If the aim is to avoid risk in UX decision-making, that makes URs risk adjusters — our work mitigates the gamble. That being the case, you’d want the best user researcher you can find, wouldn’t you?
Let me make two rash assumptions here. First, owners of digital services want those services to succeed by solving users’ problems with usable and desirable experiences. Second, owners of digital services therefore value good, reliable knowledge about who those users are, what they need, what they’re trying to do and in what contexts.
So reliability and validity are what the game is about. Research that works.
The question is, how should UR work? UR is research, right? Its claims to validity and reliability should be held to the same standards as any other kind of research. And that’s my point. Valid and reliable research is not something that anyone can do (sorry!). It takes training and discipline, because research is about a heck of a lot more than asking people a few questions or watching them attempt a task with a prototype design.
The weakness in the arguments that tend to be trotted out by experienced URs when responding to the challenge of ‘anyone can do user research’ is that they all rest on stating the obvious: the ability to drill into the heart of the problem, to be sensitive to ethical considerations, to empathise, to apply the best method to a research study and make sense of the data, to communicate complex scenarios in ways that every audience finds compelling. These are all abilities — talents, skills or proficiencies — THAT MUST BE LEARNED AND PRACTISED.
They are also mostly soft skills, which are difficult to define and therefore to quantify. So they make for weak arguments. What we need to emphasise equally are the hard skills that are easy to define and measure.
In 2007, the noted academic Mark Saunders developed something he called the ‘Research Onion’ to help students and researchers navigate, understand and apply the notoriously complex business of designing and conducting a research study. The ‘Research Onion’ has been an internationally applied framework for learning about and practising research — ANY KIND OF RESEARCH — ever since.
In its original form, rooted in academic research, it would certainly be a challenge for user researchers to apply. So I’ve adapted Saunders’ original to produce a User Research version [see image above].
It represents the series of decisions a researcher should work through when planning and implementing research. Straight away, let me say that this is not a long, drawn-out process. For the experienced UR it’s a matter of moments, because it’s already part of their research mindset. The ‘Onion’ is also a practical guide to maintaining research rigour and discipline through each step of a study.
This, my friends, is the technical side of research. These are the foundations of good research — including user research — that any researcher trained and educated in their chosen discipline understands and works within.
So, can anyone do user research? No. Not unless they are trained, with their practice honed through experience and discipline. To rely on ‘anyone’ doing user research is a fallacy that will come back to bite decision-makers, who will then look for scapegoats in only one direction: user-centred design.