Extending insight ‘shelf life’ to get more value from research in product planning

Jake Burghardt
Integrating Research
16 min read · May 3, 2021
Image: How will you characterize and reactivate ‘old’ research to reduce waste? Options presented along a dimension from ‘faster expiration’ to ‘shelf stable’: design prototype studies, interpreted behavioral analytics, quantitative market research, top-priority insights, foundational unmet customer needs.

Overview: Even as tech organizations have a recency bias toward their latest research and analysis outputs, some of their most valuable insights may in fact be ‘old.’ Unused research builds up as waste across efforts, as core insights from studies are not activated within the conventional timeframe of a research project. Leaders who want to get more value from their research investments can develop rules for research durability in order to better curate their existing, still-applicable insights. Teams can then work to repeatedly activate curated sets of existing insights within product planners’ distinct timelines.

Look at all those shiny new insights

Research and data analyses are best used when they are brand new, right? Or should we be using “old” research to generate product ideas and prioritize our roadmaps?

“This just in” can be a powerful frame for garnering attention in product teams. After all, product owners receive excessive amounts of communication. In their rapid triage of attention, “because this research was just completed” is perhaps the easiest reason to parse, generating interest in new learning.

And although they may not be ready to admit it, researchers of different stripes are part of the problem. Within organizations, different disciplines and teams that generate customer-centered research are often major contributors to information overload. Various research initiatives can too easily create disconnected streams of “the latest” content, vying for attention.

What about when research isn’t ‘hot off the presses’?

In your own work, you may have noticed that the ongoing availability of new research, including analyses of behavioral data, can not-so-subtly shift perceptions about which info is most valuable for product planners (product management, design, marketers, etc.). Especially when you factor in organizational cultures that focus on the perpetual newness of business disruption and tech innovation. Or when you add technologists’ underlying mental models about logs, feeds, and continually updating databases. “This dashboard shows up-to-the-minute results, so why doesn’t that qualitative research program have fresh data on that topic?”

Overall, there’s often an expectation that research should only be used in product decision making if it’s recent. Applied without nuance, this expectation is wasteful, and it does not benefit organizations or the people they are striving to serve.

Many types of customer-centered research involve in-depth data collection and analyses that are not real-time. Relative to the speed of product development in tech, some types of foundational customer research studies are conducted only infrequently, even in large organizations. And certain research topics may only ever receive one dedicated study, given that there are just so many other important topics to investigate.

Discounting gold-standard research solely because it isn’t ‘hot off the presses’ is a common root cause of research being wasted. And wasted research leads to decreased advocacy for what really matters to the people an organization is striving to serve (see post 1).

Years-old insights may be your most valuable research

This idea of wasted research isn’t academic. I’ve seen massive experiment wins, for core product measures, based on years-old customer research that had all but been forgotten. These old insights were resurfaced from a SharePoint folder as part of a larger effort to integrate past research (see post 2). Once resurfaced, designers and product managers judged the insights to still be applicable. Despite the passage of time, not much had actually changed in the feature area, or related customer behaviors, since the studies were originally conducted.

The moral of the story? Well, for one, if your organization’s been around a minute, then there’s gold in all those long-lost network drive folders, wikis, and dashboards of past research reports and analyses. For another, almost any time is the right time to conduct foundational research, because it will likely have staying power that can fuel product planning for years to come.

I take it as a given that most every team I start working with has an avalanche of underutilized research reports, analytics outputs, surveys, customer feedback, and more. Most everyone agrees in principle with the textbook idea that products should be fueled by insights about current and desired customers — but as for actually going back to look at all of that accumulated customer-centered research?

Looking forward — toward charismatic ideas about the next new thing — can too easily take precedence over looking backward to take stock of existing, evidence-based customer needs.

When asked about the notion of re-activating past research as a project on par in importance with gathering new data, some teams immediately recognize the potential value. Others, including some researchers, are less convinced. Because an insight carries a timestamp from last year, or even last quarter, many teams will immediately discount research that is directly applicable to their areas. The march of new data collection continues.

That’s not to say there aren’t many cases where it’s crystal clear that certain past research is no longer relevant for current challenges. Clearly, not every piece of research should be reactivated in an organization’s inventory of currently applicable learning.

But let’s not throw the ‘you could make people love your product if you tackled this existing insight’ babies out with the ‘yes, it’s true that particular insight is out of date’ bath water.

So then, why does this unused research waste build up in the first place? Where did these ‘research person years’ of unused-yet-invaluable insight come from?

Mismatches between the timing of research and planning decisions

There are many sources of research waste, so let’s narrow down the field a bit. For now, let’s set aside depth of analysis, where insight waste builds up simply because more data is collected than is adequately analyzed. That’s insight juice left unsqueezed, which is a huge topic in and of itself [1]. And, since I covered it a bit in the first post in this series, let’s also set aside insight receivers’ varied responses regarding the power of a study’s sample size or the actual content of individual insights. Instead, let’s narrow in on timing.

The timing of research delivery and activation work is a primary reason why many insights are not applied to product plans in any way.

Researchers work hard to get the scheduling of their projects in sync with product planning cycles. That’s smart. But there’s more to research impact than delivering readouts of whole studies or analyses [2].

Image shows a set of insights from two different studies and their “research activation timings.” One insight has a readout and is addressed. The others have a readout activation but do not drive change, because teams’ decision points occur without the related insights at hand.

A big factor in whether research learning is given real consideration — or becomes more legacy research waste — is whether individual insights from current and existing studies reach the right audience at the right time.

In more cases than anyone would like to admit, report delivery and activation work at the end of a research project deliver the right insight at the wrong time, or to the wrong audiences.

And even when core insights land with the team that initially sponsored a research project, they may arrive at a time when that team is not actually ready to create plans.

‘Not actually ready to create plans’ can look like total agreement on the importance of the resulting insights, and the best intentions to make change — but, given other cognitive load and a focus on more immediate deadlines, no changes to roadmaps and no additions to the product backlog.

Improving sync with planning at a more granular level

Well then, what could a closer sync between the timing of research and the timing of planning entail?

Getting in sync with product planning requires activating clusters of research insights repeatedly, on planners’ own timelines for thinking about particular topics, which can extend far beyond the traditional timeline of a research study.

There’s no one recipe for getting in better sync, because every organization (and even every team) has different approaches to defining, designing, and prioritizing what’s next. To be successful, leaders need to figure out how to better activate their accumulated research and related proposals at primary decision points. These decision points may occur in somewhat predictable planning cycles. Or they may occur as ad hoc fire drills. Or when research-focused leaders inject collections of unaddressed insights. Or they may be a mix of all the above.

So it’s not hard to imagine then, especially in a complex organization, that it’s challenging to nail down consistent inputs across major decision points. A variety of strategies are needed. For example, working with research champions in leadership to shift organizational culture toward the belief that product plans from any discipline are more compelling if they are grounded in direct references to existing research.

Better integration of research into planning is about collectively shifting cultural expectations. From “What’s the latest from research?” to “What core research insights have we not planned against yet?” From “New insights from research are used around the time they’re reported” to “Our research is a vital, durable commodity that helps us make better planning decisions.”

Getting to collective “what have we learned so far” understanding requires researchers to invest time stepping up out of their silos and disciplines in order to connect the dots across the customer journey (more on this in upcoming posts). This investment requires research leadership that’s focused on making researchers “integral planners” rather than solely a “production line” of insights or “client service” providers (see post 2).

Research knowledge management, including repositories, can enable distributed ownership of research-informed planning (see post 2). Research repositories can become hubs for collating streams of learning so that it’s more accessible when needed.

In addition to being destinations that insight-seekers can visit of their own accord, repositories can also serve up repeated echoes of insights into planning processes. They can enable leaders to campaign for evidence-based customer needs through new internal marketing and research activation processes. These communications can help shift organizational norms toward a culture where visibly incorporating existing research is viewed as essential to good planning practice.

Image shows a set of insights from two different studies and their “research activation timings.” Each insight has different activation timings. Each goes through an initial project readout, then a cadence of recurring connection around insights, leading to integration into project teams’ own decision points, on their own time.

But wait, if we’re syncing older insights to product planning, how do we decide what’s out of date?

Taking this idea of better syncing specific insights to distinct planning processes — and the earlier point on some research being highly durable — let’s return to thinking a bit more concretely about managing research shelf life over time.

After all, as more existing insights get connected to product plans, people advocating for accumulated learning and recommendations will inevitably run across research that, as it’s documented, is clearly past its expiration date. Research that fits the organization’s data policy and governance rules, but is simply no longer applicable for the products in question [3].

And as part of bridging to a mindset that makes existing insights more integral to planning, your organization will benefit from a shared point of view on which types of research are more or less durable over time.

Note that the remainder of this post gets into some explorations of the “how” of research durability, which may be more detail than you want right now — or may be just what you’re looking for. First up, we’ll look at different characteristics of research that could be used to assess durability. Then we’ll close with different approaches for curating shelf life as a distinct activity.

Assessing durability by research characteristics

Below are some research aspects that can drive rules of thumb for durability, along with some example rule ideas [4]. Durability rules can act like filters to push research content to different states, such as out of the ‘current’ learnings spotlight, into a ‘to be reviewed’ state, or even into a deep-freeze ‘archive,’ retained for understanding changes over time.

Thinking through these five aspects may drive a more nuanced conversation about which research in your organization is ‘current’:

Categories of rules for research durability, plotted on two axes. The horizontal axis ranges from simple metadata to custom metadata standards. The vertical axis ranges from generalized to nuanced. Starting in the lower left quadrant and moving up and to the right: date of research, research type, audience and market changes, severity and priority, topics investigated.
  1. Date of research
    As in when data was collected or reports were delivered. The simplest rules standardize longevity by naming a generic expiration date [5].
    Example shelf life rules…
    - Research is considered ‘archived’ 3 years after its delivery date and should not be referenced — though individual insights can be manually promoted from ‘archived’ back to ‘current.’
    - Research is moved to a ‘needs review’ state after 1 year and should be reassessed prior to applying.
    - Note that date can also be combined with other aspects to create more targeted durability rules, as in the examples below.
  2. Research type
    As in methods, including quantitative or qualitative, as well as whether it’s a continuous program or episodic data collection.
    Example shelf life rules…
    - Insights derived from our behavioral analytics dashboards are considered current if they link to the most recent dashboard data, in order to support updates and revisions.
    - Quantitative market research on our current user base will be considered ‘needs new data’ after six months.
    - Exploratory research on unmet customer needs should be considered durable unless proven otherwise, on a case-by-case basis.
  3. Topics investigated
    As in different areas of investigation and types of research questions.
    Example shelf life rules…
    - Research about fast-evolving feature areas should be considered as ‘needs review’ after six months and ‘expired’ after one year.
    - Insights on prototypes should be treated as expired after 6 months, unless they can still be found in production or can be up-leveled into more generally applicable insights.
    - Research about desirability of potential features should not expire until there is similar research to replace it.
  4. Audience and market changes
    As in segmentation, allowing research to be archived based on changes to people’s attitudes, behaviors, and other factors.
    Example shelf life rules…
    - Research focused on segments that we are no longer targeting should be archived.
    - Given that we’ve seen broad adoption of a competitor’s offering, earlier attitudinal data from prospects about our product should be considered expired.
    - We’ve revisited our personas; existing insights for the previous personas should be considered as ‘needs review’ to see if they still apply. [6]
  5. Severity and priority
    As in the relative importance of different discoveries for customers and the business, acknowledging that not all findings are created equal.
    Example shelf life rules…
    - Insights that are marked as critical or high severity should be considered ‘needs review’ every year to ensure that researchers are focusing product teams on the right areas.
    - Unmet needs marked as Top priority never expire and are only closed once they are judged to be fully addressed with launched experiments.
    - Research on deprioritized features should be treated as ‘closed.’

Please note that the above example rules are not intended to be held up as the ultimate ‘best practices.’ Instead, I’m trying to sketch a space of possibilities. And your rules may be very different, given what’s going on in your organization.

Like any rule set, you would only want to define as many rules as needed to keep the good stuff in rotation, without creating an unruly logic.
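To make the idea of rules-as-filters a bit more concrete, here is a minimal sketch in Python of how a few of the example rules above could be encoded against basic research metadata. The field names, states, and thresholds are illustrative assumptions rather than a recommended schema; your real rules would follow whatever your repository actually captures.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative states that durability rules can push an insight into.
CURRENT = "current"
NEEDS_REVIEW = "needs review"
ARCHIVED = "archived"

@dataclass
class Insight:
    title: str
    delivered: date                  # when the source study was delivered
    research_type: str               # e.g. "quantitative market research", "exploratory"
    priority: Optional[str] = None   # e.g. "top priority"
    manually_promoted: bool = False  # a human has re-validated this insight

def durability_state(insight: Insight, today: date) -> str:
    """Apply example durability rules, most specific first."""
    age_in_days = (today - insight.delivered).days

    # Rule: top-priority unmet needs never expire on age alone.
    if insight.priority == "top priority":
        return CURRENT

    # Rule: manual promotion overrides date-based archiving.
    if insight.manually_promoted:
        return CURRENT

    # Rule: quantitative market research needs new data after six months.
    if insight.research_type == "quantitative market research" and age_in_days > 182:
        return NEEDS_REVIEW

    # Rule: generic expiration -- archive after 3 years, review after 1 year.
    if age_in_days > 3 * 365:
        return ARCHIVED
    if age_in_days > 365:
        return NEEDS_REVIEW
    return CURRENT

# Example: a two-year-old exploratory study lands in 'needs review', not the trash.
study = Insight("Unmet needs in onboarding", date(2019, 4, 1), "exploratory")
print(durability_state(study, date(2021, 5, 3)))  # -> "needs review"
```

The point of a sketch like this isn’t the code itself — it’s that ordering a handful of rules from most specific to most generic keeps the set small and legible, rather than letting it grow into the unruly logic mentioned above.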

Once you pilot a starter version of your durability rules — and ideally introduce them by getting value out of a long-forgotten piece of research — the question becomes: how to apply your rules across amassed research content in your organization?

Approaches for applying durability rules to curate research

Depending on the size of your organization’s body of existing research, you might be thinking that going through and assessing durability sounds like a heap of work. It absolutely can be [7].

You’ll want to weigh efforts to curate research shelf life against other actions for integrating research and building out research operations [8]. For example, you might agree that curating shelf life is crucial for a new research repository to succeed, but still choose to start with a relatively low-effort approach to curation, given your other efforts to increase uptake of research. (I’ve certainly been in those shoes before.)

Below are some ideas for curation approaches, starting with smaller efforts and extending to major projects:

Approaches for applying research durability rules, plotted on two axes. The horizontal axis ranges from low breadth to high breadth. The vertical axis ranges from infrequent to frequent. Starting in the lower left quadrant and moving up and to the right: stakeholder flagging, meta-analysis subset, researcher ad hoc, recurring manual review, rule-based automation.
  • Stakeholder flagging
    Letting stakeholders know that they can flag problems with the current applicability of particular insights. Address an insight? Let us know, and let’s celebrate it.
  • Researcher ad hoc
    During the course of their work, research contributors review the current applicability of any past research that they happen to access.
  • Meta-analysis subset
    Research repository owners and collaborators conduct a one-off, manual review of a subset of collected research [9]. For example, checking whether insights are still applicable while conducting a meta-analysis for a current planning priority, or reviewing all the insights rated as severe usability issues.
  • Rule-based automation
    The research repository system automatically applies durability rules based on descriptive metadata (date, research type, topics, etc.). Setting up this automation may require extensive effort, depending on your chosen tools, but it could turn your durability rules into lived reality (a rough sketch follows this list).
  • Recurring broad-scope, manual review
    Research repository owners review all research that is marked as ‘current,’ ensuring that the ‘current’ set remains genuinely applicable. This heavy effort could be made smaller by applying rule-based automation as a first pass, creating buckets for different degrees of human inspection (e.g. ‘current,’ ‘current: needs review,’ or ‘closed: auto-archived’).
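As a hypothetical illustration of that automated first pass, the sketch below (again in Python, with assumed column names, file name, and thresholds) buckets a repository export into review queues, so that human reviewers can start from a manageable ‘needs review’ pile instead of the whole archive.

```python
import csv
from collections import defaultdict
from datetime import date, datetime

def first_pass_bucket(delivered: date, today: date) -> str:
    """Date-only first pass; humans refine these buckets afterwards."""
    age_days = (today - delivered).days
    if age_days > 3 * 365:
        return "closed: auto-archived"
    if age_days > 365:
        return "current: needs review"
    return "current"

def bucket_repository(export_path: str, today: date) -> dict:
    """Group rows of a repository export (a CSV assumed to have 'title' and 'delivered' columns)."""
    buckets = defaultdict(list)
    with open(export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            delivered = datetime.strptime(row["delivered"], "%Y-%m-%d").date()
            buckets[first_pass_bucket(delivered, today)].append(row["title"])
    return buckets

if __name__ == "__main__":
    # Hypothetical export file; each bucket becomes a human review queue.
    for state, titles in bucket_repository("research_export.csv", date.today()).items():
        print(f"{state}: {len(titles)} insights")
```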

Regardless of how you get started, revisiting existing, shelf-stable research is a crucial opportunity to build groundwork in your organization. Groundwork for a culture of iteratively integrating research content into planning. Groundwork to ‘walk the talk’ of those claims of being a ‘product laboratory’ — actually becoming an organization that builds from a shared ‘laboratory notebook’ of evidence and ideas.

So, what’s your oldest, unused, potentially revolutionary insight about the people you’re striving to serve? How might activating that insight in product planning unlock new pathways for research impact?

In summary

  • Recency bias, when applied to research, results in wasted effort and decreased advocacy for what really matters to the people an organization is striving to serve.
  • Due to mismatches in the timing of delivery for specific insights and the timing of related planning efforts, your organization likely has ‘insight gold’ buried in your intranet junk drawers. This ‘gold,’ if brought back in sync with product planning, could drive crucial outcomes for your customers and your business.
  • Getting in sync with product planning requires activating clusters of research insights repeatedly, on planners’ own timelines for thinking about particular topics, which can extend far beyond the traditional timeline of a research study.
  • As research is brought up at a later time, there will inevitably be pushback on its currency, and many insights will, in fact, be out of date. You can build nuance in these conversations by developing durability rules based on different research aspects and then applying those rules to existing research content.
  • Within organizations, all of this work to keep still-applicable research alive can provide essential groundwork for a culture of iteratively integrating research content into planning. When you shift perceptions of research durability, lots of other great things can follow.

In the next post, I’ll look at combining different streams of research into an ongoing program of synthesized research:

‘Integrating different types of research in repositories: Ongoing mixed methods across organizational boundaries’

If you’ve read this far, please don’t be a stranger. I’m curious to hear about your challenges and successes increasing the shelf life of research. Thank you!

Connect on LinkedIn
Sign up for email updates (monthly, at most)

Footnotes:

[1] You might hear leaders ask things like: “Shouldn’t machine learning tell us what’s in all this unanalyzed data?” Often there’s an opportunity to build research literacy underlying those types of questions. But, in some cases, automated analyses can be a perfectly valid area to explore. While automated analyses that are fully bottom-up can’t usually get much juice out of customer feedback and raw research data, rules and classifiers grounded in human-centered learnings, as a form of mixed method research, are an exciting direction.

[2] Working with individual insights, rather than whole studies, requires deconstructing a research project into smaller elements. This idea of documenting deconstructed research content has been gaining steam, but it is still novel for many. A research report can be viewed as a container for a wide variety of individual insights and related ideas. Many of these individual elements could be looked at separately (or in smaller clusters), whenever they happen to be most relevant to a particular audience’s planning efforts. In an upcoming post, I’ll dive into the pros and cons of capturing different elements from studies, depending on your situation and goals. Earlier posts in the series (1, 2) reference others’ thinking in this space.

[3] On policy and governance: “Have a policy, either an existing one or one you create, on dealing with old information such as contact lists, personal details, photos, and videos — and act on it…” “Get the process of reviewing and updating the repository into your organisation’s workflow and processes.”
2020, Jonathan Richardson: “I built a user research repository — you should do the same” https://medium.com/researchops-community/i-built-a-user-research-repository-you-should-do-the-same-df680e140df8

[4] Pace layers are also an interesting way to think about the durability of research, as well as the timelines of different types of research methods:
2020, Brigette Metzler: “Leveling up your Ops and Research — a strategic look at scaling research and Ops”
https://medium.com/researchops-community/leveling-up-your-ops-and-research-a-strategic-look-at-scaling-research-and-ops-eec38133f7cc

[5] Searching around, it’s not hard to find research expiration date rules such as: “…a research study is considered to be outdated when it is over three years old due to market/economic and consumer behavior variations, demographic changes, and alterations to the product.”
https://segmeasurement.com/content/when-study-considered-be-outdated
Or in a more academic context:
“A troubling attitude seems to be taking hold in the scientific community. It concerns how far we should go back when searching the literature. Many researchers and reviewers consider research that is more than 5 years old — or even 3 — to be outdated and irrelevant. I have noticed that more reviewers, in their comments on a manuscript, are writing “out-of-date reference list,” to refer to lists that contain publications dating back further than 5 years.”
2003, L. Gottlieb: “Ageism of knowledge: outdated research.”
https://www.semanticscholar.org/paper/Ageism-of-knowledge%3A-outdated-research.-Gottlieb/52d209987b2e828b2c6e285070c11585d99f451c

[6] Example durability rule was sparked by this article:
2016, Kim Salazar: “Are Your Personas Outdated? Know When It’s Right To Revise” https://www.nngroup.com/articles/revising-personas/

[7] It’s worth noting that I’m skipping the very difficult step of centrally amassing various research content for review in the first place, to be discussed in future posts.

[8] On prioritization of curation in relation to other activities:
“Knowledge Management > Data Gardening” is only one part of the larger scope of activities captured as part of the Research Operations framework.
2018, Kate Towsey: “A framework for #WhatisResearchOps” https://medium.com/researchops-community/a-framework-for-whatisresearchops-e862315ab70d

[9] An analogous process in a clinical research article:
“(i) formulation of the research problems and questions; (ii) setting of parameters for the search and retrieval of studies; (iii) determination of inclusion and exclusion criteria; (iv) appraisal of the clinical relevance of findings; (v) selection of the findings that will be synthesized; and (vi) interpretation of the results of that synthesis.”
2008, Julie Barroso, Margarete Sandelowski, and Corrine I. Voils:
“Research results have expiration dates: ensuring timely systematic reviews” https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2364717/


Jake Burghardt
Integrating Research

Focused on integrating streams of customer-centered research and data analyses into product operations, plans, and designs. www.linkedin.com/in/jakeburghardt