Two keys to data and product research.
Or why you don’t bring a sniper rifle to a fishing trip.
If you’re in a hurry, there is a shorter, corresponding slide deck available on Slideshare.
This article is written for what I’d like to call ‘data enthusiasts’. You have a motivation to approach data but no systematic education around it. You may be an engineer turned user researcher or an executive at a company that runs A/B tests for every major decision.
Approaching product research and the resulting data can be a daunting task with little instant gratification. There’s a seemingly infinite number of concepts and definitions, different approaches and tools and, to make it all worse, a ton of jargon. Unsurprisingly, most introductions to data make you wade through long vocabulary lists to get started.
I don’t think that’s a good idea. Unless you’re a stats student, you should use your time more wisely than reading up on the exact definitions of ‘qual’ and ‘quant’ or spending a weekend understanding the difference between ‘confidence’ and ‘significance’.
Stay away from jargon and statistical deep-diving for your early steps towards data
There are two basic concepts that I would really recommend wrapping your head around:
Think about how putting together a jigsaw puzzle usually involves two phases. In the first phase you’re mainly concerned with getting an overview of what’s there and where you can start making a dent. It makes little sense to search for a specific piece, as there are many pieces to search through and you usually don’t have a lot to build on. Depending on your strategy you may look for all the edge pieces or sort pieces by colour. There are more unknowns than knowns.
Then, the game will slowly move to the second phase when you have loads of clues to go on and fewer and fewer unknowns in the box. Often you will search for individual pieces to finish off a section you’ve been working on. You will look through the box, identify a piece that seems to be right and then test whether it fits where you thought it would fit. This method is more effective at completing the puzzle but needs some initial structure to go on. The closer to the end you get, the easier this will become.
This is product research in a nutshell. You first look for signals of what the bigger picture could look like. As you piece the product vision together you develop ideas, or hypotheses, about how things fit. You then test how well they fit, and either move on successfully or go back to the drawing board to develop a better hypothesis.
Product research is based on gathering signals and testing hypotheses
There are two distinct processes here. The first is exploration: in our example, getting an overview of the puzzle pieces. The second is evaluation: building a hypothesis about what piece goes where and testing it. This is what being data-driven is all about: continuously exploring data from many sources and, as frequently as possible, evaluating your hypotheses. To maximise learning you want to do this in an iterative fashion. Exploration should stimulate evaluation, and evaluation should stimulate exploration.
To become somewhat savvy around research and data, get a feel for how these processes apply to everyday life. Ask yourself: what areas in your current situation need exploration, and what ideas need evaluation? How good an idea do you have of what the problems are? How close do you think you are to the solutions? And how often do you return to exploration once you have started working on a solution?
Think exploration and evaluation — everything else will follow
For a fuller understanding of exploration and evaluation you can think of them as using a fishing net (exploration) or a sniper rifle (evaluation).
Exploration: Like casting a fishing net, exploration is all about gathering signal. It’s about throwing out a net, looking at your catch and forming an idea of what’s under the surface. You want to use a fishing net for several reasons: to stay in touch with the needs of your user base, to know what technical issues your product has and what competitors are doing, and, most fundamentally, to avoid having large blind spots just about anywhere. You can go fishing with advanced tools and techniques (usage logs, heat maps, web analytics) but also on an absolutely basic level (Starbucks testing or community forum analysis).
Evaluation: Like sniping a target, evaluation is about building, testing and rejecting hypotheses. You need a higher level of preparation, more rigour around your accuracy, and you need to know precisely what you’re looking for. Using a sniper rifle without a clear target (hypothesis) is neither effective nor efficient. But a sniper rifle is required if you want to test how well a product change is performing, how usable your latest prototype is or whether commercials are affecting the experience of your new users.
I strongly urge you to be precise about what your goals are. Write them down and make sure you don’t forget them over the course of the project! What do you want to explore, and what type of signal are you looking for? What are the hypotheses you are evaluating? In the product development context it’s easy to avoid taking a clear stance: ‘Let’s just put it in front of some users’ or ‘We’ll launch and look at the data’. Sentences like these are a clear sign that you’re messing up at the most basic level. Using both evaluation and exploration in the same research study often makes a lot of sense, but at all times be clear about what you want to explore and what you want to evaluate, and analyse the two separately.
Explore with a fishing net and evaluate with a sniper rifle
Thinking about whether you need a fishing net (exploration) or a sniper rifle (evaluation) also helps you estimate the required scope of your endeavour. Evaluating with a sniper rifle will always take a certain amount of time, depending on the sophistication of your method, and you can quite confidently say in advance when you should start setting up. Exploring with the fishing net on the other hand can be something you do spontaneously by yourself if you know how and where. Or you can plan an expensive and lengthy expedition if you want to explore areas that you have never visited before.
Talking about exploration and evaluation will also help your communication. Not everyone has the same definition of technical terms like ‘qual’, ‘subjective’ or ‘longitudinal’, especially when you are working with people from different backgrounds. There are a dozen different versions of diagrams mapping these terms, and many of them disagree with each other. If I had a nickel for every time someone misused the words ‘qual’ and ‘quant’ I would be writing this article from an estate on Tahiti. Keep it simple!
The last major point I want to make is that thinking about exploration and evaluation first will also clear up things tremendously when doing the actual research and analysis.
Let’s look at user testing sessions for example. You can set them up in either an exploratory or an evaluative way. Want to fish for unknown issues? Focus on qualitative recordings, mix objective (behaviour observation) with subjective (think aloud) and go with a rather small sample — you will find the most frequent issues with just 5 users.
If you want to evaluate the product with a sniper rifle: set up your hypothesis, then run a more rigorous task-based test and calculate the percentage of success with a confidence interval around it. Let your desired accuracy define your sample size. Do smaller sample testing (n=5–15) if you don’t need to be extremely accurate in your evaluation. If you do need that accuracy (because of a very specific hypothesis, or to compare two prototypes) run a bigger user test or go for a remote testing approach.
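As a rough sketch of what that success-rate calculation can look like (the function name and the 7-of-10 numbers are purely illustrative, not from any particular study), here is a Wilson score interval, a common choice for small-sample proportions:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 ~ 95%)."""
    if n == 0:
        raise ValueError("need at least one trial")
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return max(0.0, centre - margin), min(1.0, centre + margin)

# Illustrative: 7 of 10 users complete the checkout task.
low, high = wilson_interval(7, 10)
print(f"success rate 70%, 95% CI roughly {low:.0%}-{high:.0%}")
```

Note how wide the interval is at n=10: that width is exactly what should drive your sample-size decision when you need a more precise evaluation.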
The key takeaway here is: the same method can be applied in different ways, depending on the specifics of your goal. If you really need quantitative data you add some measurement. If you want a before-after comparison, you repeat the study after a certain time. This is where it gets complex and where you want to consult an expert or do some serious reading.
Yet, to start and guide this process you first need to know what it is you want to do: Explore or evaluate.
Methods and setup should be driven by your approach — not the other way around
The beauty of approaching data through the lens of exploring and evaluating is that it draws a common thread from company strategy to product roadmap to method selection. If the company or your project is mainly about continuous improvement, your process will be built around concurrent exploration. If you’re concerned with optimising a shopping website you may regularly look at quantitative click maps or funnels to see where your users drop off or what their browsing strategies are. Or you do user testing and see where they get fed up or how much trouble they have moving through the checkout. From those observations you generate hypotheses and evaluate them at intervals based on your release cycle.
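A funnel analysis of this kind boils down to step-to-step conversion rates. A minimal sketch, with entirely made-up checkout numbers:

```python
# Hypothetical checkout funnel: (step name, users who reached it).
funnel = [
    ("view cart", 1000),
    ("enter address", 620),
    ("enter payment", 410),
    ("confirm order", 355),
]

# Compare each step with the next to find where users drop off.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    drop = 1 - next_n / n
    print(f"{step} -> {next_step}: {next_n}/{n} continue ({drop:.0%} drop off)")
```

In this made-up example the biggest leak is right after the cart, which is the kind of signal that would feed a hypothesis for the next evaluation.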
If, on the other hand, you work on some discontinuous big-bang innovation, you’ll likely board a modern fishing trawler and take some time exploring those unknown, exciting waters (e.g. by doing diary studies or direct field research). Say you want to innovate by providing a dating app for cats. Chances are you don’t have much data on that and need to do a bigger field study to investigate what cats look for in a good partner. You will end up with a lot of signal and many divergent hypotheses. Should your app focus solely on pictures, or do cats respond better to audio, or video? You can then evaluate these with small targeted shots (prototype evaluation) to weed out inaccurate hypotheses (a process some call de-risking). In our example you may learn from the prototypes that cats get easily bored by videos, so you decide against them. Your main launch will ideally take the form of a very precise evaluation of your remaining few hypotheses. You may launch two final variations of your product and then quickly kill the one that performs worse. Ideally you’ll also run exploratory research from day one, so that if your evaluation misses its mark you can instantly look at the available signal and adjust your hypotheses for the next shot with little delay.
Don’t get into the rut of doing only exploration or evaluation. Going exclusively for exploration means that you will generate a lot of ideas but never actually test them with rigour. This may lead to overconfidence in the things you believe you know. Doing only evaluation, on the other hand, will keep your pool of ideas very limited and you may miss out on bigger opportunities.
You will encounter lots of people who talk about ‘qual’ and ‘quant’ or ‘subjective’ and ‘objective’ right away. Don’t let that confuse you. All those data attributes are important to consider once you’re planning and running the research. But first you need to know what your approach should be.
After all, there is no point in bringing a sniper rifle to a fishing trip.
Thanks to Ashley Smith, Gareth Holder, Rochelle King, Julian Kirby, Lukasz Twardowski, Rahul Sen and Sophie Albrecht.
- Jigsaw Puzzle: Curt Smith
- Germanwings flight recorder: Bureau d’Enquêtes et d’Analyses (BEA, France, www.bea.aero)
- LHC CMS detector: 2008 CERN, photo: Maximilien Brice, Michael Hoch, Joseph Gobin