Understanding the product learning landscape with a ‘snapshot inventory’ of all types of internal research — to get more done with insights
It’s insight groundhog day. There’s that ‘do we know anything about’ question again. Research teams in tech often scope the ‘we’ in these questions to themselves, diving into repository tooling to make their own insights more findable — without considering the breadth of different ways that their product organization learns.
First things first: tracking down all internal staff who generate insights, then compiling their past outputs into a common list. A simple compilation of completed reports can enable motivated insight seekers to find the inputs they need for current projects. ‘Snapshot inventories’ can also be used to characterize the opportunity for bigger investments toward ongoing research reuse.
Great researchers try to keep their fingertips on insights that they could later share and reapply. But even when teams are good about sharing their new insights beyond their immediate stakeholders (B2), individual researchers can't effectively hold all the available learning in their organization. Before long, there's too much to sort back through. And tech organizations constantly scale and evolve in ways that create new pockets of waste in the form of underutilized insights.
As someone trying to get more juice from existing research, one of my first steps is to compile a list of as many research outputs as I can find. Asking research colleagues for links to their intranet pages and ‘file pile’ folders. Starting with teams that are close to where I’m sitting and then working my way out, going full information omnivore. I meet with various insight generators along the way, including both dedicated staff and any product people taking advantage of tools to run their own investigations.
I try not to make it a big deal. I just start compiling the inventory in a spreadsheet, with some basic descriptive columns. When people tell me their research is no longer applicable, I ask if it’s for governance reasons, as in cases when outputs should no longer be used due to participation agreements. If reports are still deemed legally and ethically usable, but folks just think they are too stale, I add them to my list anyway. It’s amazing what happens when you put a ton of past research together and start to see patterns emerge. Patterns of how an organization is learning, what it’s choosing to investigate, and how it’s acting on different types of learning.
I’ve chatted with teams who choose a research repository tool and then consider how to populate it. That’s ‘tool first’ thinking, from overwhelmed and harried researchers who haven’t invested adequate time to know better.
I usually argue for taking an approach that's the opposite of 'tool first' thinking: getting to know the research landscape in your organization, experimenting with pulling it together in simple ways, sharing consolidated outputs that demonstrate the value of 'old' research, and then, and only then, starting to think about requirements for repository tooling.
Sharing out a 'snapshot inventory,' with some holistic reflection and pointers to key content, can become an important early step toward changing teams' perceptions about the ongoing value of research: shifting from the idea that research is something used only around the time it's conducted to the notion that research often has ongoing value as an asset to inform product plans over time. Revisiting unaddressed problems to solve from 'old' research reports can serve as a reminder that it often takes multiple touch points with a customer insight before product teams actually find their way to prioritized action.
Pulling together a list of insightful outputs into a single document ‘location’ can also help forge new connections between different researchers and teams, allowing for visibility into common ideas and intents. In this way, a ‘snapshot inventory’ is another early step toward building collective identity and purpose for a broader research community — being seen together in one place, within a single ‘boundary artifact’ that each contributor and insight seeker can use in their own way.
Improving your insight operations
Get more done with your research community’s insights by:
- Getting the word out about your intention to capture a snapshot of available research
Since you will need inputs from a wide range of colleagues, find a succinct way to summarize your goal and share your intent via different channels. To round up a range of sources, avoid describing what you're looking for in discipline-specific language. Offer a low-barrier way to submit content, and create some urgency by setting a feasibly fast-paced timeline for finishing the snapshot.
- Tracking down streams and pockets of documented learning
Assume that what’s actively given to you is the tip of the iceberg of what’s available. Ask insight users about which sources they’ve applied to their product planning. Follow the veins of promising research to tools, locations, and people that may surface more. Follow the reputations of people who do research, regardless of their discipline. Start to capture insight-rich final outputs (that are still appropriate to use) in a list with the title, year, researcher, generating team, research methods, location, and other initial description.
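If you happen to keep the running list as structured data alongside (or instead of) a spreadsheet, each report can stay a deliberately flat record. A minimal sketch in Python, where the field names simply mirror the suggested columns above and are illustrative assumptions rather than a required schema:

```python
# One row of a 'snapshot inventory': a flat record per research output.
# Field names mirror the suggested columns; adapt them to your organization.
def make_entry(title, year, researcher, team, methods, location):
    """Build a single inventory record with basic descriptive fields."""
    return {
        "title": title,
        "year": year,
        "researcher": researcher,
        "team": team,
        "methods": methods,    # e.g. ["interviews", "survey"]
        "location": location,  # link or folder path to the final output
    }

inventory = [
    make_entry("Checkout usability study", 2022, "A. Researcher",
               "Payments", ["usability test"], "intranet://reports/123"),
    make_entry("Pricing survey readout", 2023, "B. Analyst",
               "Growth", ["survey"], "drive://growth/pricing"),
]

print(len(inventory), "reports captured")
```

Keeping records this simple makes it easy to round-trip between a spreadsheet and a script when you later want to sort, filter, or tag at scale.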
- Sorting the inventory to frame what’s available and to de-emphasize some categories
When in the midst of creating one of these inventories, it may seem as if there’s always more. At some point, the ‘long tail’ of inputs starts becoming harder and harder to find. As your list scales and the end of your project timeline approaches, start exploring how to order what you’ve found. For example, you may decide to relegate obviously stale or incomplete content to a secondary view, or sort it lower in your list. You may decide to give secondary sources, such as industry reports, less emphasis than your organization’s own research. And, if you’ve ended up collecting pointers to partially or wholly unanalyzed data sets, you may want to separate those into their own category as well.
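The de-emphasis described above can be expressed as a simple sort: rank each record's category, then order by rank and recency. A hedged sketch, continuing the spreadsheet-as-data idea; the category names and their ranking are illustrative assumptions, not a standard:

```python
# Sort a snapshot inventory so primary internal research surfaces first,
# secondary sources next, and stale or unanalyzed material sinks lower.
CATEGORY_RANK = {
    "internal": 0,    # your organization's own completed research
    "secondary": 1,   # industry/analyst reports
    "unanalyzed": 2,  # raw or partially analyzed data sets
    "stale": 3,       # still usable, but judged out of date
}

def sort_inventory(entries):
    """Order entries by category rank, then newest first within a rank."""
    return sorted(entries,
                  key=lambda e: (CATEGORY_RANK[e["category"]], -e["year"]))

entries = [
    {"title": "2019 diary study", "year": 2019, "category": "stale"},
    {"title": "Analyst trends report", "year": 2023, "category": "secondary"},
    {"title": "Onboarding interviews", "year": 2022, "category": "internal"},
    {"title": "Checkout survey", "year": 2023, "category": "internal"},
]

for e in sort_inventory(entries):
    print(e["category"], e["title"])
```

The same ranking can drive a secondary view instead of a sort: filter out the lower ranks for the headline list and keep them reachable in a separate tab.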
- Annotating for purpose and sharing the ‘completed’ snapshot
Before closing the loop on your initial communications by sharing the resulting inventory, consider adding columns that will drive understanding of the collected sum while improving the specific findability of individual pieces of research. There's potential for a huge amount of deliberation and effort here, so scoping based on time is essential. There's not enough time to develop a robust taxonomy; aim instead for a quick, limited round of tagging. You may want to surface key insights and find lightweight ways to connect obviously related reports from across teams. When the end of the project timeline arrives, share the resulting 'snapshot inventory' with calls to action about how it might be used, along with what you've learned about the resulting whole and its limitations. Your snapshot is probably also worth a road show discussion with core product teams, in collaboration with related researchers, to provide another active touchpoint with essential yet underutilized insights.
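A quick, limited round of tagging can also be scripted as a reverse index: given lightweight tags on each record, group reports by shared tag to surface obviously related work across teams. A minimal sketch under those assumptions; the tags and team names are invented for illustration:

```python
from collections import defaultdict

def related_by_tag(entries):
    """Map each tag to the reports carrying it; tags that appear on two or
    more reports suggest related work worth cross-linking in the snapshot."""
    index = defaultdict(list)
    for e in entries:
        for tag in e["tags"]:
            index[tag].append((e["team"], e["title"]))
    # Keep only tags shared by more than one report.
    return {tag: hits for tag, hits in index.items() if len(hits) > 1}

entries = [
    {"title": "Checkout usability study", "team": "Payments",
     "tags": ["checkout", "trust"]},
    {"title": "New-user interviews", "team": "Growth",
     "tags": ["onboarding", "trust"]},
    {"title": "Pricing survey", "team": "Growth", "tags": ["pricing"]},
]

print(related_by_tag(entries))
# 'trust' connects a Payments report and a Growth report
```

Even a handful of shared tags like this can seed the cross-team connections worth calling out when you share the snapshot.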
- Bridging from a snapshot toward something more enduring
A snapshot quickly becomes stale as insight generators continue to report new learning. As teams put your snapshot to use, there will inevitably be questions about what will happen next. Snapshots can be repeated at regular intervals, maintaining the inventory as a bare minimum form of collective knowledge management. And these snapshots can be used to build justification for more formal research repository programs — ranging from more formal 'Research Registers' to more intensive 'Insights Hubs.'
- Your idea here…
On the path from insight to product impact
Normalizing research reuse in an organization, by starting to integrate various research content, is part of being seen as having sufficient evidence. It also supports usefully articulating insights and helps product teams build awareness of existing planning targets.
If you’ve read this far, please don’t be a stranger. I’m curious to hear about your challenges and successes rounding up research to increase reuse in your organization. Thank you!
- A3. Extending insight ‘shelf life’ to get more value from research in product planning
- B1. Growing research ‘impact radius’ by connecting learning to more internal product audiences — to get more done with insights
- B3. Fusing research teams’ roadmaps to enhance collaboration, efficiencies, and insight quality — and get more done with insights
- A4. [On Re+Ops Community Medium] Opening the gates: Addressing researchers’ concerns about broadening access to research repositories
- View list of all ‘Integrating Research’ posts (and upcoming topics)
- “You’ll be in a meeting, and someone will present their work, and they’ll email it, and then in 6 months, you see someone talk about something similar, and you think, I’ve seen that, where was it? Who said it? Where did I save it? Everywhere, every day, people are creating new knowledge, and even though the most privileged among us have access to all the world’s data, we still find it hard to learn anything. We still can’t find anything. We still wrestle with managing just the things we learn every day….” Brigette Metzler
- “I’ve been thinking for a while why it doesn’t feel right to me to jump straight into building or procuring a research library or research repository. Discussion of sharing research / findings / insights more effectively will often quickly turn to tools as solutions to this. But there is much to consider before you can have an effective research library, there is much infrastructure that needs to be in place… To me, having an effective research library or insights library, is one of the most sophisticated things we can do in the field of user research. And if you don’t have a mature research practice and a mature research ops practice, going straight to a tool for your library you may be trying to run before you walk…. Sharing research findings and insights more effectively needs to start by having at its foundation…a simple list (catalogue) of research done, so work can be accessed with just enough context to know that you are accessing the right thing.” Stephanie Marsh
- “Firstly, we found that the four different models of knowledge management systems were: A research register — often the place researchers start — effectively a spreadsheet cataloguing research that has been done, is in progress, or is planned for the future… An ‘insights hub’ — generally prepared for a wider audience, including only de-identified data. These are used as a way of organising and prioritising user needs, pain points, insights, opportunities and other forms of synthesis from the research. They are often combined with links to the raw data, or links to research outputs.” Brigette Metzler, Bri Norton, Dana Chrisfield, Mark McElhaw