Image: Gego, Reticulárea (ambientación), 1969.

Integrating UX insights into product planning

Video + transcript of a talk on open research platforms delivered at the UX360 Research Summit in January 2024

Casey Gollan
16 min read · Jun 11, 2024


If you’re an iPhone user and you’ve ever sent a text message to a different kind of device, or vice-versa, you’ve probably experienced the “green bubble nightmare”:

You have no idea if your message sent, your super-HD video shrinks to the size of a postage stamp, and all of your “haha” reactions get converted into robotic-sounding prose.

This is the UX of interoperability: it’s invisible when it works. But, wherever you see users experiencing confusion, fragmentation, and frustration when working across systems — you’re likely seeing a lack of interoperability.

Images: Messages Improved / “iPhone reactions are ruining group texts and irritating friends.” Slate

Today, I’m super excited to talk to you all about what interoperability has to do with user research. My name is Casey Gollan. I’m a board member of the global ResearchOps Community and manager of our UX Research Insights and Engagement team at IBM.

In 2024 research data remains siloed

My mission as a ResearchOps practitioner is to make it possible for every design decision to be informed by research and by data.

But in order for data to make a business impact,
it needs to reach the right stakeholders,
at the right time,
in the right place.

This is what got me looking closely at how research flows, or gets stuck, as it moves between the platforms used by researchers, product teams, and executives.

I’ve found on a technical level that research platforms are really locking data in instead of doing the most important work: which is to integrate insights into product planning.

How research happens…in real life

If we look across the research cycle, you can see that researchers are producing, consuming, and generally just juggling a lot of data. Not to mention, many different kinds of data.

But research, as far as I’ve seen, is almost never tidy like this.

In real life, research is non-linear. It’s iterative. It looks a lot more like jumping between phases and jumping between tools. And increasingly, research is also continuous. Across the industry, we’re seeing a shift towards research happening at every stage of the product development process, and a push to democratize research by extending research practices to wider product teams and beyond.

Image: Adapted from Research Skills Framework led by Dave Hora and Tomomi Sasaki

Stop wasting research

UX research often still takes the shape of large initiatives with elaborate reports. So a research team might put in six months of work, and then it basically gets presented for an hour before the deck is dropped in a folder and never seen again.

It’s kind of like that saying, “If a tree falls in the forest, does it make a sound?” The answer is: I think so? But I don’t have any way of proving it.

I love how Jake Burghardt, who has a book coming out called “Stop Wasting Research,” puts it. He says:

“Let’s be honest: applied insights are often only momentary sparks experienced by small audiences. Product teams are unaware what others within their walls have previously learned, and the problem only grows as organizations scale.”

Atomic design and atomic research

Design organizations are about 5 years ahead in asking some of the same questions about how to work smarter and stop wasting design. Does it really make sense for every team to be designing their own type of button?

Brad Frost popularized this concept of atomic design, which has evolved into a whole universe of design systems. By breaking design down into its component parts, more people across the company can create things that are functional, accessible, and beautiful—and they can do it fast.

In UX research, there’s a similar kind of motion underway: shifting the emphasis from large deliverables to what Daniel Pidcock has called “atomic research” and Tomer Sharon has called “research nuggets.” An atom of research is a key data point, a concise but targeted insight, or an actionable recommendation that is validated, de-risked, and ready to ship. Each of these atoms can have ripple effects as awareness spreads throughout the company.
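To make the idea concrete, an atom of research can be sketched as a small structured record. The field names below are illustrative assumptions on my part, not any standard or any platform’s actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResearchNugget:
    """One atomic unit of research: a single evidence-backed insight."""
    statement: str                 # the insight itself, in one sentence
    evidence: List[str]            # quotes or links backing the statement
    tags: List[str] = field(default_factory=list)   # for search and reuse
    confidence: str = "medium"     # how validated/de-risked the insight is

# A single atom, ready to be surfaced, searched, and linked to.
nugget = ResearchNugget(
    statement="Users abandon checkout when shipping costs appear late",
    evidence=["usability-test-2024-03, P4 at 12:30"],
    tags=["checkout", "pricing"],
)
```

Because each atom is small and self-describing, it can travel between tools in a way a 100-page deck never can.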

Research platforms: behind the interoperability curve

To operationalize this new way of working, a huge number of web-based product research platforms have sprung up. These tools make it effortless to slice and dice research data. But if you look across platforms, you’ll find that the way they each organize research data is frustratingly different.

Today’s platform makers are tied up in competition with each other and have no agreements about what constitutes a research “finding” versus an “insight” versus a “story”.

Images: Figma, Sketch / Dscout, Maze, Condens, Dovetail, Great Question

Open research platform futures

What would an ecosystem of open and interconnected research platforms look like? There’s so much possibility here, so I’ll just touch on a few highlights today.

A framework for open research platforms

Let’s define what an open research platform is. There are four key areas that you’ll want to pay special attention to:


Data portability means being able to get your data in and out of a system in standard formats, and to do it easily.

Jared Forney, ResearchOps Principal at Okta, wrote a great essay about his experience migrating his team from one research repository to another. What stood out to me—other than his thoughtfulness—was the 11-month timeframe.

The key word on portability is “easy.” Below is a snapshot of a few notes from various platforms about all the limitations you’ll encounter when you actually try to get your data out.

Once you start to dig in, you might be shocked at how slow, tedious, and lossy these processes are. So before you commit to a platform, look closely at the docs and make sure you’re not backing yourself into corners like these.
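As a minimal sketch of what good portability enables: if a platform can give you a clean JSON export, getting your notes into a plain, tool-independent format is a few lines of code. The export shape here (a list of objects with `title` and `body`) is a hypothetical example, not any vendor’s real format:

```python
import json
import tempfile
from pathlib import Path

def export_to_markdown(export_json, out_dir):
    """Convert a (hypothetical) platform JSON export into plain markdown
    files, one per note, so the content outlives the tool."""
    notes = json.loads(export_json)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, note in enumerate(notes):
        text = "# {}\n\n{}\n".format(note["title"], note["body"])
        (out_dir / "note-{:03d}.md".format(i)).write_text(text, encoding="utf-8")
    return len(notes)

# Demo: a one-note export, written to a temporary directory.
sample = json.dumps([{"title": "Finding 1",
                      "body": "Shipping costs surprise users at checkout."}])
out_dir = Path(tempfile.mkdtemp())
count = export_to_markdown(sample, out_dir)
```

When exports are slow, partial, or lossy, even this simple step becomes an 11-month migration project.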


The second point is integrations. This is how you connect your data to other platforms. It’s important to check whether integrations are built-in, or require a third party, and whether they’re real-time or have limits.

You’ll find integrations advertised all over marketing websites. But, in reality, these platforms are doing a lot more to centralize and silo insights rather than meaningfully integrate them into the platforms where our collaborators work.

Shamsi Brinn wrote a great post about how her team ended up using Jira, a tool really designed for software developers, as their research repository. She calls it the “imperfect best choice.” I love that, in this example, she prioritized a platform that would put research in front of the right people, and a tool that the organization was already actively using.

Where many research repositories fall flat is that they become dusty archives. They’re just not part of anyone’s day-to-day work, sometimes not even the researchers’.
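As a sketch of what the Jira-as-repository approach implies technically, an insight can be filed as a Jira issue through its REST API. The payload below follows Jira’s v2 issue-creation shape (where `description` is plain text); the project key, issue type, and field values are assumptions that depend on your Jira configuration:

```python
def insight_to_jira_payload(summary, description, project_key="UXR"):
    """Build a Jira 'create issue' payload (REST API v2) that files a
    research insight where product teams already work."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }

# POST this JSON to https://<your-site>/rest/api/2/issue with auth headers.
payload = insight_to_jira_payload(
    "Users abandon checkout when shipping costs appear late",
    "Observed in usability-test-2024-03; see P4 at 12:30.",
)
```

The point is less the specific tool than the direction of travel: pushing insights into the systems collaborators already live in, rather than pulling collaborators into yet another silo.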

Image: various research platform marketing pages / Using Jira as a research repository: pros, cons, and how-to by Shamsi Brinn


For extensibility, the key feature is having an API. This is what allows you to build whole new ways of doing research on top of—and across—platforms.

In 2024, there are still no robust APIs for popular software used for qualitative data analysis. Think: recording, transcribing, coding, synthesizing.

Of the platforms that do have some form of API, you’ll find partially available data and broken outputs.

There are also a lot of promises that more functionality is coming soon, but I’ve been looking at these same promises for years now.

To be fair, some of these companies have just sprung up. But, if we look at companies like UserTesting, they’ve been around for 17 years and Dovetail for 7.
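To gesture at what extensibility could look like, here is a tiny sketch of the read surface an open research platform might expose. This is a hypothetical design of my own, not any vendor’s real API:

```python
from typing import Optional

class OpenResearchAPI:
    """A sketch of the read surface an open research platform could
    expose. Hypothetical: no such cross-vendor standard exists today."""

    def __init__(self):
        self._insights = []

    def add_insight(self, insight: dict) -> None:
        """Register one atomic insight (represented as a plain dict)."""
        self._insights.append(insight)

    def list_insights(self, tag: Optional[str] = None) -> list:
        """Return all insights, optionally filtered by tag."""
        if tag is None:
            return list(self._insights)
        return [i for i in self._insights if tag in i.get("tags", [])]

api = OpenResearchAPI()
api.add_insight({"statement": "Admins miss the onboarding checklist",
                 "tags": ["onboarding"]})
```

With even this minimal surface available over HTTP, you could build search, dashboards, and cross-platform triangulation on top of, and across, tools.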


Finally, sustainability. Even successful platforms disappear.

Last year, two of the largest research platforms were acquired by the same private equity firm and a plan was announced to merge these two offerings. Colleagues across the industry have shared with me that this has been a frustrating and uncertain experience as investment has shifted away from improving the existing platforms.

But, venture capital is not the only way to create and maintain software. There are other models, like open-source and open-core software.

On the slide below, you’ll see about 30 software brand names.

On the right is a list of open-source tools. Some of these are super robust, while others are a little more experimental. But, in any case, you’re in a very small minority of power users if you’ve heard of any of these open-source companies, or you’ve used them in your day-to-day work.

There’s a big gap in open-source research tooling. If you look at areas like conducting research, surveying users, or analyzing transcripts, there’s not a lot of maturity for enabling these use cases using open-source, especially relative to the huge amount of innovation happening amongst VC-backed startups.

But from a sustainability standpoint, there’s nothing more robust than an open-source platform. Even if the maintainers of these platforms were to close up shop and walk away, you could continue to run your own instance, and keep adding features yourself forever.

The global market for SaaS is estimated at around $3 trillion. Imagine if just a fraction of these resources were instead directed to open-source products, which could benefit everyone.

Open-source research tooling is a big enough idea that it could be its own talk.

Beyond the “ops”

ResearchOps is service design

Bringing it back to the role that ResearchOps can play in integrating insights into product development, Kate Towsey describes ReOps as a lot less like administration and a lot more like service design.

I think we can extend the progression from service design to service delivery into the future, and see that most operations—once they’ve been delivered consistently for long enough—actually start to become tools, platforms, or at least repeatable workflows.

In the field of DevOps, for example, professionals have created and are supported by extremely sophisticated tooling, like what we see in the observability and the continuous integration space. So DevOps practitioners are typically working on higher-order problems than administration.

ResearchOps is designing dark matter

I would add to the idea of service design that ResearchOps is also designing “dark matter” to quote the strategic and systems designer, Dan Hill:

Dark matter describes the imperceptible, yet fundamental, facets of design. Organizational cultures, regulatory environments, business models, ideologies.

Hill proposes that architects could actually have a greater impact on the city by spending less time designing buildings and more time designing building codes.

Dark Matter and Trojan Horses by Dan Hill

ResearchOps is a platform team

Looking at these definitions of ResearchOps as:

  • service design,
  • strategic design,
  • and systems design,

gives a little bit of context for a shift that my team went through last year, from describing ourselves as an operations team to describing ourselves as a product team, and specifically, one that builds platforms.

Gergely Orosz writes:

Being a platform team means owning the building blocks that program teams use to ship business impact. Programs are built on top of platforms, and platforms enable teams to move faster.

We’re not just working for research, but actually working to get research integrated on a systems level into other disciplines: product management, design, development, and across leadership.

New directions towards integrating UX research

I’ll wrap up by sharing a few directions on how I’m thinking about integrating research.

While our team started off talking a lot about “atomic research”, we ended up coming up with our own name for a new initiative, which we call “research highlights.”

We decided to focus on where we have the most leverage as a platform team, which is in the later stages of the research journey. Not determining how research is done, but informing how it’s communicated, integrated, and measured.

We did some research around user needs of three personas:

  • researchers,
  • product managers,
  • and executives.

And, we uncovered important nuances in what each of these users needs out of research.

Because our team had already had success in building a research library that’s in use across the organization, we identified 4 opportunities to evolve that into a new kind of insights hub:

1. Actionable → structured content

Smaller, more specific units of research can be surfaced, searched, and linked to. This is important for impact tracking, because connecting a product roadmap item to an entire 100-page research report doesn’t tell you anything in particular about the what, the how, or the why of research utilization.

In our research library, we’ve introduced another level of content in addition to the project, which is the research highlight.

Keep in mind that this is post-synthesis. So, we’re really just surfacing findings, insights, and recommendations that researchers have already created. But, we’re providing guidelines, kind of like the ones you see below by Etienne Fang’s team on how to communicate more effectively.

And, we’re making it possible to pull these small units up out of the deck, while making sure that the decks are still available in all of their richness.

2. Trackable → connected to product roadmaps

Given a huge product-led growth transformation that our partners in product management are undertaking, it became clear that this was a strategic moment for both product management and user research to visualize, very specifically, the impact that research is having on product roadmaps.

At the same time, we were observing that research managers were measuring impact in many non-standardized ways. For researchers, tracking impact is often a manual process: a page in their slide deck mapping the relationships between development issues, roadmap items, and research recommendations. And it’s time-consuming to keep this up to date and to check back to ensure that research is actually getting implemented.

A key insight from our interviews with researchers is that successful research impact often looks like invisible influence. Through a convincing UXR presentation, a research recommendation gets planted in the mind of a product manager. A PM might come out of a meeting saying, “Hey, I’ve got this great idea!”

We wanted to build a system that helps product managers make links back to specific UX research insights. For PMs, this helps demonstrate that they’re not just caving to HiPPOs (the Highest Paid Person’s Opinion). Product decisions, when they’re linked to UX research, can be backed up by a reference to methodically validated data.
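At its simplest, this kind of linking is just a table relating roadmap items to the insights that informed them. The item and insight IDs below are made up for illustration:

```python
from collections import defaultdict

# Hypothetical link table: (roadmap item, research insight that informed it).
links = [
    ("roadmap-142", "insight-007"),
    ("roadmap-142", "insight-031"),
    ("roadmap-155", "insight-007"),
]

def insights_per_roadmap_item(links):
    """Group insight references by roadmap item, so a PM can point to the
    validated research behind each product decision."""
    by_item = defaultdict(list)
    for item, insight in links:
        by_item[item].append(insight)
    return dict(by_item)
```

The same table, read in the other direction, gives researchers their impact tracking for free: every roadmap item an insight touched.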

3. Measurable → embedded with biz and UX metrics

Another key insight from our research was that research impact only becomes visible in time.

Even an actionable recommendation takes time to implement, and business KPIs like revenue or satisfaction are lagging. So it can feel like too many steps removed to make a confident claim that research caused specific changes.

What if when we’re looking at increasing or declining satisfaction or revenue, we could put that in the context of key milestones in the collaboration between research and product—to show what might be driving these changes?

The standard way of looking at impact is big numbers. Maybe with a few charts thrown in if you’re feeling fancy. Daniel Schmidt describes this as a “flat dashboard” because you’re not looking at metrics in the context of…anything.

For any dashboard, the question that I try to ask is: What decisions is this helping you make?

On the right, if you were to rethink this dashboard as a system of inputs and outputs, leverage points, and feedback loops, and to disambiguate your leading metrics (the user behaviors you can impact through design) from your lagging metrics (like satisfaction), you can start to draw connections.

In the system dashboard (below right), it starts to look pretty clear what the measure of success is for each of these initiatives: their impact in the context of a north star metric, and their intended effect on the business KPIs.

4. Composable → shared and re-used company wide

Finally, a direction that we’re really building towards is extending the reach and the impact of research.

This means making sure that research is shareable and reusable. In the “quadrant of doom chart” by Matt Duignan (below), we see a little dopamine molecule in the bottom-left quadrant. This is the reward that you feel when you bring something useful to your collaborators, and they immediately thank you for your work.

The reason that the top-right is labeled the “quadrant of doom” is that when we put research into a repository, if we do it right, it’s going to help tons of people across the company, long into the future. But as a researcher you might never meet these people, and they might never reach out and thank you for your work.

So the real promise of a research platform is to shift an organization from a siloed, short-term way of thinking about research impact as something that happens within a specific project or team this quarter, to thinking about research as a contribution to a company’s collective future knowledge and decision-making.

Duignan’s team has also done some really interesting work to visualize research as a network graph. Coming back to that fundamental problem of research waste, you can start to see where there’s heat on the map and navigate neighborhoods of research. You can make cross-references to triangulate information that validates or disproves a past thesis.
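A minimal, stdlib-only version of that network idea: link insights that share at least one tag, so related research surfaces as neighborhoods. The corpus and tags here are invented for illustration, and a real implementation would use richer signals than tag overlap:

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical corpus: insight ID -> set of tags.
corpus = {
    "insight-001": {"checkout", "pricing"},
    "insight-002": {"pricing", "plans"},
    "insight-003": {"onboarding"},
}

def related(corpus):
    """Build an adjacency map linking insights that share a tag -- a
    minimal way to navigate 'neighborhoods' of research."""
    edges = defaultdict(set)
    for a, b in combinations(corpus, 2):
        if corpus[a] & corpus[b]:  # any shared tag creates an edge
            edges[a].add(b)
            edges[b].add(a)
    return dict(edges)
```

Even this toy graph shows where there’s heat on the map: heavily connected insights are candidates for triangulation, and isolated ones may be gaps or waste.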

So, there’s a lot more work to be done in this area.

Scientific communication

If you’ve ever heard the saying, “A better pen doesn’t make you a better writer,” I think this points to where we get lost in conversations about research tooling. It’s obviously true that a good tool can’t make impact out of weak research. But, a really bad pen can make you unable to record your ideas clearly.

Images: The Cognitive Style of PowerPoint by Edward Tufte / Columbia STS-107 crew members, NASA

Information designer Edward Tufte has made a controversial argument that PowerPoint contributed to the loss of the space shuttle Columbia because of a limitation in what he calls “the cognitive style of PowerPoint”.

In the annotated PowerPoint slide (above), you can see where significant launch risks are mentioned several times.

But is the gravity really made clear?

I can totally imagine being the busy person who glazes over this information and doesn’t catch the warning. I’ve experienced firsthand the lack of a rigorous way to track next steps, prioritize what’s most important, and make sure that recommendations turn into action.

(There have been some counterarguments to Tufte, suggesting that facts were not a significant driver of the decision to launch or not launch, and that evidence was ignored due to management pressure.

But, that is a whole other talk on research and power.)

Thinking back to Hill’s definition of dark matter:

it’s not the pen or the platform that matters, it’s the social life of research.

How is research getting communicated? Instrumented? Is it understood? Does it have influence?

Researchers and ResearchOps practitioners are in the right place at the right time to have a lot of leverage on our industry and push towards more open, interoperable, and impactful futures.

So maybe you—with the understanding that research is only as successful as the change it creates—will start your own platform, and begin by designing an API for research, rather than just another siloed interface.

Powers of Ten by Charles and Ray Eames

Thanks so much for taking the time to read this talk transcript.

I will leave you with a few quick resources and takeaways:

  1. Download the Open Research Platforms Evaluation Cheatsheet and please share back your experience of trying to integrate platforms!
  2. Join the Research Ops Community Slack and check out the #tools channel, where we have an ongoing conversation about topics like these.

If this has sparked anything for you or you’ve got follow-up questions — please connect! I’m always looking forward to continuing the conversation.

Special thanks to the UX360 team for organizing a great event and the invitation to keep elaborating on these ideas and my teammates at IBM who have helped shape these directions.


Casey Gollan is a Manager of UX Research Insights and Engagement @ IBM. The above article is personal and does not necessarily represent IBM’s positions, strategies, or opinions.