Mapping research impact in product development

Jake Burghardt
Integrating Research
10 min read · Apr 17, 2023
Where is research making a difference? Which areas of product development should we target next?

[Figure: An abstracted heat map diagram with categories of product development: Foundations, Vectors, Prioritization, Plans, Construction, Validation.]

In a recent workshop, I shared a model for mapping a research community’s current and desired impact on product development. It’s about looking across the research that’s been delivered to date within an organization, at a meta-level.

I presented the model in the context of defining requirements for research repository initiatives — but it has other strategic uses, such as:

informing roadmaps that will move a research community toward more impactful roles in product development, or

highlighting where to build new collaborations and processes in order to grow desired impacts.

Sharing this ‘impact mapping’ idea here to explore the approach a bit further…

So many different finish lines

It would be an overstatement to say that if you ask 10 researchers how they define research impact, you’ll get 10 different answers — but, in my experience, it’s really not that much of an exaggeration.

In connecting with product organizations about research impact, I’ve heard wildly different definitions of success. Everything from “make sure my team thinks I’m indispensable” to “point to product launches initiated by specific research insights in our repository.”

Building upon Victoria Sosik’s great approach, I’ve started defining impact both at a high-level and with some more detailed tools. At the high-level, a working definition is:

Research impact is the informed action that builds meaningful product plans and outcomes, organizational changes, and individual careers.

Impact may occur long after a study is complete, and it’s valuable to understand at the level of the individual contributor, team, research discipline, and broader research community.

Beyond this high-level framing — down in the details — there’s a ton of variation in different organizations. Researchers at a tech startup can focus on immediate and visible impacts from their fluid work, but find they are never moving ahead in important areas. On the other side of the continuum, the research community at a tech giant may have wins within their established niches of responsibility, but struggle to understand the possibilities of broader, more distributed impacts.

The remainder of this article gets into sense-making tools that research communities and leaders of researchers can use to chart meaningful destinations for their own particular contexts.

When possible, outputs over leading indicators

When folks talk about tracking anecdotes and research impact, some are more focused on leading indicators — highlighting various forms of internal progress — and others are more focused on end-result outputs that meet end-customer, business, and societal needs.

Leading indicators

Represent key groundwork:

  • Throughput — Stakeholders served, Questions addressed, Types of questions asked, …
  • Engagement — Collaborations, Participation in research, Sharing research, …
  • Internal Perception — Leadership satisfaction, Role in projects and processes, …

Output factors

Speak the language of your organization’s leaders:

  • Product plans (feature proposals, designs, backlog items, etc.) — Initiated, Shaped, Deprioritized, …
  • Product launches — Initiated, Shaped, …
  • Improvements in core measures — Revenue, Job-to-be-done completion, Customer satisfaction, Impacted populations’ satisfaction, Negative uses averted, …
  • Internal changes to drive product outcomes — Operations, Organizational changes, …

These different lenses on impact are tools that we can leave in the toolbox — or deliberately pick up with the expectation of some hard work. And like any tool, they are best used in certain situations. Most notably, research communities working on unlaunched products are unable to use some of the output factors listed above. However, I typically encourage teams to track some output factors whenever possible in order to more closely connect research flows into the structural work of product organizations.

But also keep in mind that tracking impact always has an accounting cost, with some beans taking more work to count than others. Collecting anecdotes or qualitative data for these factors can take as much work as running a large, recurring study. And quantitatively measuring everything isn’t feasible or valuable, given most organizations’ staffing levels.

Up next, we’ll consider how to narrow in on where to track — and invest in growing — research impact.

Three categories of impact

Through Brigette Metzler’s writing, I learned about the work of John N Lavis, Dave Robertson, Jennifer M Woodside, Christopher B McLeod, and Julia Abelson, who developed a particularly useful framework for categorizing impact: Conceptual, Symbolic, and Instrumental.

Sounds a bit too academic? Bear with me — it’s worth the friction of some new terminology. While the authors’ focus was on research impact in health policy, their framing is just as applicable to iterative product or service development.

Conceptual impact

Providing new understanding that could shape multiple product decisions.

“…a more general and indirect form of enlightenment…” — Lavis et al.

Important but inherently fuzzy, conceptual impact can be the easiest type of impact to claim based on stakeholder anecdotes. However, it can also be difficult to trace to specific product or organizational outcomes that prove value, so it’s usually not the whole story.

Examples of what you might hear for this type of impact:

  • “After watching the overview on that customer type, I brought a different perspective to every meeting.”
  • “That model and the insights behind it give us a lot to think about as we’re going into our next planning session.”

Symbolic justification impact

Providing ‘data’ used to support a decision in flight — which can be a good thing.

“…used to justify a position or action that has already been taken for other reasons…” — Lavis et al.

Although researchers may feel it’s a bit ‘cart before horse’ when stakeholders use research as justification for an idea that’s already underway, it’s often a case of ‘great minds think alike,’ with research evidence aligning with important assumptions and hypotheses. Citation can be inherent to this kind of impact, making traceability possible — especially when there are publicized standards for how to reference research.

There will always be some cases where stakeholders ‘cherry pick’ ill-fitting research evidence to push their ideas forward. When these cases arise, leadership escalation and visible questioning about the conclusions can help — or at least provide cautionary tales for others considering ‘cherry picking’ for their next efforts.

Examples of what you might hear for this type of impact:

  • “This research illustrates why our current plan addresses a real customer unmet need.”
  • “We were planning on updating how we track beta feedback. Post-launch research also confirmed the need to revisit our process.”

Instrumental impact

Providing the impetus to go after a product effort — or to reshape and clarify it.

“…acting on research in specific and direct ways, such as to solve a particular problem at hand…” — Lavis et al.

While it can take intensive follow-through and ownership to drive research-based outcomes, instrumental impact is where research can truly claim the victory lap. Beyond insight activation during the study timeline, traceability between research and action has to be actively established, and researchers may need to politely push into planning and design conversations where they are not normally included.

Examples of what you might hear for this type of impact:

  • “Based on research insight X, we are designing and creating a plan for feature Y.”
  • “We heard from customers that X was a real challenge. With feature Y, we’ve got you covered.”
  • “We’re building out standard design components for this area based on research insights A, B, and C.”

Categories of impact matrixed into product development

When we take the three types of impact and matrix them with some basic stages of product development, we create a topography for understanding where research is landing within an organization — and where we might steer it next.

Here are some basic targets for research impact across product development — which you should feel free to break down differently for your own context:

  • Foundations — Customer knowledge
  • Vectors — Strategies and goals
  • Prioritization — Roadmaps and backlogs
  • Plans — Definition and designs
  • Construction — Iterative refinement
  • Validation — Understanding performance

Matrixing these with our categories of impact results in a map to fill in as a research community or stakeholder group:

[Figure: A matrix with targets for research impact in product development across the top, and conceptual, symbolic justification, and instrumental impact along the side. The key shows impact levels as colored circles: 1. Excel (red-orange); 2. Inconsistent (orange); 3. Early indicators (yellow); blank squares represent no known impact. The size of the colored circle within a square represents the org-chart breadth of that impact.]

Within this matrix, we can heat map the amount of research impact in a given square…

  1. Excel — Ongoing, consistent impact
  2. Inconsistent — Seeing impact, but not as consistently as desired
  3. Early indicators — Some initial successes to build upon

… using size of circle as a rough indicator of organizational reach:

  • Large — Broad organizational reach
  • Medium — Impact spread within a silo
  • Small — Limited impact radius
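To make the exercise concrete, here is a minimal sketch of how a team could record the matrix as data — say, for a spreadsheet export or a simple script — before turning it into a visual heat map. The stage and impact-type names come from the article; the specific ratings and the rendering approach are hypothetical examples, not a prescribed tool:

```python
# Minimal sketch: the impact-mapping matrix as data.
# Stage and impact-type names are from the article; the
# ratings below are hypothetical examples.

STAGES = ["Foundations", "Vectors", "Prioritization",
          "Plans", "Construction", "Validation"]
IMPACT_TYPES = ["Conceptual", "Symbolic justification", "Instrumental"]

LEVELS = {"excel": 3, "inconsistent": 2, "early": 1}  # heat-map "color"
# Reach ("circle size"): large = broad, medium = silo, small = limited.

# Each cell: (impact type, stage) -> (level, reach); unrated cells stay blank.
impact_map = {
    ("Conceptual", "Construction"): ("excel", "large"),
    ("Instrumental", "Construction"): ("inconsistent", "medium"),
    ("Conceptual", "Plans"): ("early", "small"),
}

def render(impact_map):
    """Return a plain-text grid: level number + reach initial per cell."""
    rows = []
    for itype in IMPACT_TYPES:
        cells = []
        for stage in STAGES:
            entry = impact_map.get((itype, stage))
            cells.append(f"{LEVELS[entry[0]]}{entry[1][0].upper()}"
                         if entry else "--")
        rows.append(f"{itype:<24}" + " ".join(f"{c:>3}" for c in cells))
    return "\n".join(rows)

print(render(impact_map))
```

Even a rough structure like this makes the current state easy to diff over time as ratings change from quarter to quarter.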

When rating each square in the matrix, think of the impacts across research to date; don’t think about one particular study. So, for example, a big generative study may have had conceptual impact on how features are validated, even though the actual methods at the time of validation could be A/B testing or something similar.

After we have mapped the current state, we can pick the next areas where the research and stakeholder communities want to see more impact — and start generating conversations, ideas, goals, methods, and processes to get there.

And just like that, we can have a game board to overlay evidence and desires in order to have a concrete and strategic discussion about the value of research in an organization.

Example mapping: Hired for usability, moving upward

In this case, a research community is seeing all three types of impact in the construction phase of product development and has changed how teams think about validating new offerings. Their research work has trickled upstream as conceptual understanding across product development, and this community has seen some early wins in being referenced in plans and designs.

[Figure: A matrix with targets for research impact in product development across the top, and conceptual, symbolic justification, and instrumental impact along the side. Squares are color coded as a heat map reflecting the case described in the surrounding paragraphs.]

Moving forward, the community wants to see their research referenced as symbolic justification for why features are prioritized in the first place, and they want their evidence to drive more definition and design particulars, landing consistent instrumental impact in these areas. Lastly, this community wants to expand the reach of its instrumental impact in the construction phase, doubling down on their current wins in this area.

To move forward, this research community will need to find new touch points for their work in more prioritization and planning conversations.

  • To prepare, they might organize their research knowledge in order to improve referenceability for product people — getting their repository houses in order by standardizing report templates and connecting into a larger product knowledge base.
  • They may then work to rally leadership support, sharing early indicators of research impact and asking for buy-in to tweak product planning processes.
  • With buy-in growing, this community can then start leadership reporting on targeted product efforts that are using — or not using — research, with a particular interest in the work that was initiated or substantially shaped by customer insights.

Example mapping: Hired for conceptual strategy, increasing instrumental impact

In this case, a research community is excelling at conceptual impact at the front end of the process, and seeing some inconsistent symbolic justification impact up front as well. When it comes time for product teams to prioritize and define plans, their research is having some inconsistent conceptual impact. They are also seeing early indicators of impact in a range of other areas; in particular, they have seen some inspiring examples of how their work has driven instrumental impact in strategic goals and roadmap items.

[Figure: A matrix with targets for research impact in product development across the top, and conceptual, symbolic justification, and instrumental impact along the side. Squares are color coded as a heat map reflecting the case described in the surrounding paragraphs.]

Moving forward, the community wants to see more instrumental impact for vectors and prioritization decisions, building from the conceptual and symbolic justification impact that they’re already landing — and actually driving more new initiatives.

To move forward, since this research community already has relationships in the front end of planning, they might work to drive new accountabilities with those leaders.

  • To enable those accountabilities, they might manage their insights differently in order to push a digestible point of view on priorities, while enabling citation of specific research in planning work products.
  • Along these lines, they might drive more ideation exercises at key moments in the planning cycle.
  • They may also seek to include new, research-based factors in product prioritization criteria.

Building forward from current impacts

Stepping back for a broad look at where collected research has made an impact — and where internal communities want to see more wins — can allow researchers to reframe their work and get off the made-to-order ‘insight assembly lines’ that teams often find themselves on.

By stepping out of your own discipline and mapping impacts with other insight generators in your organization, you can find new opportunities to connect dots and build shared impact as a multi-threaded research community.

To carry this impact mapping exercise forward, you can identify the leaders who can help make your desired impact areas a reality, and start generating proposals for changes to research planning, knowledge management, product ownership, goal setting, planning touchpoints, and more — all of which are topics for future articles in this series.

If you’ve read this far, please don’t be a stranger. I’m curious to hear about the impact framing and tracking methods that have been useful in your work. Thank you!

Connect on LinkedIn
Sign up for email updates (monthly, at most)
