Over the past year we’ve invested heavily in improving our research practices, methods and tooling (Research Ops) in the spirit of helping our product teams get even closer to their customers and as a result, making quicker, better informed product decisions.
Last year we saw a huge uplift in the number of conversations teams were having with their users (a 100% increase from 2016 to 2017), to the point where research is, for most teams, part of a fortnightly (if not weekly) cadence.
Research is now considered a core part of how we do product development at Redgate and something everyone (Designers, Product Managers and Software Engineers) plays an active part in; bringing different perspectives to what we see users do, hear them say or have personally experienced.
Despite our successes integrating customer research into the fabric of our development process, it’s important to recognise that the act of conducting good, regular research is a means to an end; this measure in itself is not indicative of success.
The fundamental reason we conduct research is to help us make good product decisions. These regular conversations help to challenge our assumptions and internal biases, address the gaps and shortfalls in our knowledge and mitigate the risks associated with making decisions without the supporting evidence.
To this end, we have started to apply some rigour to the way our teams conduct, capture, analyse and share their findings; whilst still recognising the need to scale our teams’ research efforts and experiment with different methods and tooling.
Once we’ve identified a topic we want to investigate (or some broader area of interest), who we should ideally speak to and what we might need to learn first (our riskiest assumptions), we’ll set about the task of gathering data. At this stage we avoid any conscious attempts to analyse, interpret or make sense of this data; that will come later.
When conducting their research, teams will typically flex between three discrete but overlapping (in some cases concurrent) modes of research:
1. Generative research: Broad in scope and nature. Less about the products and more about users; what they are trying to do/achieve (JTBD) and the problems they experience whilst doing so.
2. Evaluative research: Converging on an idea in an attempt to find problem-solution fit. Understanding how a given concept would address the needs (and to what degree) of those who experience the problem.
3. Usability testing: Narrow in scope. Focused on specific part(s) of the products’ interface and their ability to support a given task. Is it simple and intuitive? Do people understand it and can they easily use it?
In a number of cases teams will double down on a research opportunity to include both generative and evaluative modes of discussion within the same research session. These ‘dual-track’ discussions will often see the conversation start in much broader terms; seeking to learn more about the business, users and their needs; before switching to some more detailed validation of a new product idea.
We’re now moving towards a model that sees teams regularly engaging in all three modes of research; looking for new opportunities, validating product ideas and testing the latest implementation. This approach ensures that teams can quickly and repeatedly ‘close the feedback loop’; ensuring we never go too long without speaking to the people for whom we design and build software.
Other sources of data
At Redgate we often draw on other sources of information in addition to our regular research activities. Product feedback and ideas will also filter in from the various channels through which Redgate interacts and converses with customers; including:
- Support tickets
- Sales feedback
- User forums
- UserVoice requests
- Uninstall feedback
As with any information we gather from or about our users, these serve as signals or inputs and won’t necessarily translate 1:1 into product decisions because:
- We create mass-market software, designed with broader consideration of what our customers need; as opposed to catering for individual, bespoke requests. What’s right for one person is unlikely to be right for everyone else.
- It’s our job as product people to decide what to build. We’re looking to better understand the problem and whether it’s the right problem, as opposed to someone’s interpretation of what a good solution might be.
When planning for and conducting research the efforts of our teams typically fall into one of three categories of qualitative methods:
1. Interview: High frequency, lower fidelity. A lot of our users are very willing and enthusiastic about speaking to us, and since many of our customers are in the US, our primary mechanism for gathering data remains interview-based phone or video calls.
2. Observation: Lower frequency, higher fidelity. Teams are now charged with ‘getting out of the office’ more frequently, such that they can observe users in context; to understand more about their environment, processes and who they interact with in their line of business.
3. Empathy: At times we also practice empathy activities like an internal version of a ‘day in the life’ study; designed to allow teams to experience first-hand the situations and challenges of our users; as well as encouraging regular exposure to and usage of our products internally.
We endeavour to use these methods at the right times, depending very much on what it is we need to learn. We also believe that combining methods gives us the best chance of discovering something genuinely new and insightful.
For example, phone-based interviews can often reveal a lot about what people do (or say they do), but we also seek to augment that with the fidelity and context that we would only get from observing people in the field.
Data vs. Information vs. Insight
As teams go on this learning journey the hope is that they will discover something truly insightful, but this first requires a process of analysis and synthesis; without which there is every risk that they become lost in a sea of data.
To avoid procrastination and provide some pathway through an unordered mess, we go through a process to make sense of chaos; organising our thoughts and observations, identifying patterns and finally extracting meaning.
At a high-level our process from data to insight looks something like this:
1. Gathering: Collecting data from the various sources we have available.
2. Organising: Bringing relevant data together for the purposes of analysis.
3. Synthesising: Spotting trends/patterns through clustering and analysis.
4. Forming: Extracting meaning and insight as a result of trends/patterns.
5. Framing: Articulating the context, problem and desired outcome.
Remember, data is not insight… it’s not even in the class of information until we’ve understood more about the surrounding context. This is a process of funnelling; starting big and broad and narrowing down to a handful of key insights, through the application of context and inference of meaning.
So what makes for a good insight?
When attempting to form insights we generally look for the following characteristics, which serve as signals that we might have learnt something new, useful and hopefully actionable.
- Reveal the unknown: Is there something new or novel we have learnt as a result of this insight?
- Address assumption(s): Does this new knowledge help validate/invalidate one or more of our assumptions?
- Enable progress: Will it answer a pressing question or help us make a quick product decision (e.g. tie-breakers)?
- Inspire action: Is there something plausible we could/should do as a direct result?
- Present opportunity: Is there an opportunity here for us to innovate/be more innovative?
Types of insight
Insights can come in a number of different shapes and sizes. Some will relate to the market and our understanding of our customers, some relate to the problem space and some relate to how users engage and interact with existing solutions.
Some examples of different types of insights include:
- People: Behaviours, attributes, needs and values of different cohorts
- JTBD: Context, causality and outcomes that inform purchase decisions
- Pains: Frequency, severity and commonality of problems encountered
- Behavioural: Patterns, trends, anomalies in current user behaviour
- Experiential: Perceptions, feelings, emotions when using our products
As we start to gather these broader user-problem or job-executor-job insights, we begin to assume a position of knowledge and from that, can start to populate a number of key, more strategic artefacts. These artefacts are utilised to help us:
- Identify, rationalise and prioritise new product opportunities
- Plan where we should focus our efforts to deliver the most value
- Understand what we might need to design to address users’ needs/pains
These key artefacts include a combination of personas, Job Maps (JTBD), Value Proposition Canvas(es) and User Journey Maps; where each has value and purpose in informing or augmenting our understanding of the scope, scale and significance of the opportunity.
As well as helping us find new opportunities, there is also a lot to be said for having a shared, team-wide understanding of this ‘foundational knowledge’; such that anyone external or new to the team can reasonably quickly understand:
- Who they are designing their product(s) for
- What those users are trying to do/achieve
- What problems they encounter whilst doing so
- Why users/customers buy or continue to buy the product(s)
- To what extent the product currently addresses identified needs
This gives teams a solid platform of knowledge to build upon; it’s difficult to know where we could go and what we should do next if we don’t know where we are currently. Developing this shared understanding and empathy ensures the whole team is aligned and can continue to make sound, customer-centric product decisions.
As we start to peel back the layers and uncover causality and motivation, we find that it makes less sense to think in terms of our current product silos, which begs the need to share or cross-pollinate insights between product teams and other parts of the business.
Where the jobs of users may well transcend the products and packages we have created in the past, this encourages us to think outside of the boundaries in which we typically operate; thinking more in terms of how we might integrate Product A with Product B or combine Product A and Product B to create Product C.
On the solution side, customer feedback and insights from evaluative research help us to not only find the right solution, but to identify the best version of that solution. We can quickly learn if an idea resonates with users (how well and to what extent it solves their problem); using low-fidelity mockups and prototypes to cycle through a few different ideas at any given time.
While attempting to find a good solution, it’s plausible that we’ll also discover even more about users and the exact nature of the (or some related, but equally important) problem. This might lead us to question or reconsider whether this is indeed the right problem to solve, or whether our original understanding and interpretation of that problem was correct.
The act of showing users something tangible (in the form of a mockup or prototype) is a great, low-cost way to test three core types of bridging assumptions:
- That we’re solving the right problem for the right people
- That those people see this as something valuable and worth solving
- That the solution we’ve identified is a good fit
Insights in the solution space would typically help teams know how best to proceed and in what direction; helping to answer the question of whether to continue the search for a good solution or whether to move ahead into execution.
We’ve come a long way with our approach, but we’ve still much to do before we can confidently systemise our research practices.
We’re still experimenting with parts of the process and trying to make sense of how everything ties together; understanding how and where we might need to scale and adapt according to the product life cycle.
The tip of the iceberg is certainly in view when it comes to surfacing and sharing insights between teams and departments; but we’ve yet to find an effective system for doing this in a way that is reliable, repeatable and scalable.
If this approach or any of these challenges resonate with you, we’d love to discuss this some more and would welcome your thoughts, as well as some insight into your own respective research practices. We’re always happy to share our experiences and very keen to learn from best practice across the industry.