Won’t Tools with AI Solve Research Waste?
Tenets for applying AI to research systems (excerpt from the forthcoming Rosenfeld Media book “Stop Wasting Research”)
Knowledge management (KM) tools won’t solve research waste on their own.
While AI options can boost productivity on component tasks, there’s no “push button” technology that will make the problem of research waste go away.[1]
Research repositories can be essential enablers of impact, but only if they’re deeply adopted across a range of different types of work.
Research influence on product planning is a highly social practice that’s enabled by technology, not a strictly technological process.
When appropriate, adding AI-based features to research tooling can usefully support operations such as transcription, translation, outlier detection, theming, pattern mapping, format standardization, and more. In defined use cases, AI can complement researchers’ smarts, sometimes saving extensive manual effort, as in the sketch below.
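As one illustration, consider theming. This minimal sketch uses the OpenAI Python SDK to ask a model for candidate themes across a handful of interview snippets; the model name, prompt wording, and snippets are all illustrative assumptions, and the output is a draft for a researcher to review rather than a finished analysis.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative interview snippets; in practice these come from your repository.
snippets = [
    "I gave up on the export feature after the third timeout.",
    "I keep a side spreadsheet because I don't trust the dashboard numbers.",
    "Exporting big reports is the only part of my week I dread.",
]

prompt = (
    "Group the numbered interview snippets below into two or three candidate "
    "themes. For each theme, give a short label and the snippet numbers it "
    "covers. These are draft suggestions for a researcher to review.\n\n"
    + "\n".join(f"{i + 1}. {s}" for i, s in enumerate(snippets))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not a recommendation
    messages=[{"role": "user", "content": prompt}],
)

# Print the draft themes; a researcher confirms, renames, or rejects them.
print(response.choices[0].message.content)
```

Note the narrow scope: the model proposes groupings for one defined sub-task, and the researcher remains the one who decides what the themes actually are.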
As you consider applying AI options, these tenets can inform your decisions:
Security, privacy, ethics, and governance first
Start by asking “should we?” before moving on to “are we allowed to?” Seek out diverse perspectives and inputs.[2]
Enlist technology to support tasks, not to outsource understanding
Define specific sub-outcomes to offload on the way to larger outcomes, rather than running an AI system expecting a definitive “answer.” AI automation can locate and transform content, but only people can understand it. Time spent with research evidence can lead to more valuable and differentiated insights and product plans.
The more domain knowledge required, the more human involvement
Creating, maintaining, and connecting actionable research insights requires a rich understanding of your customers, your products, and your organization. Great insights can sit at the intersection of factors that are hard to offload to general automation: related prior learning, documented frameworks, niche industry standards and terms, the features of your product or service itself, and the specific “so what” for possible internal owners. As the variety of domain knowledge needed to accomplish a task increases, researchers should be more invested and “in the loop” of the work.
The higher the importance and risk, the more human involvement
Ambiguous, “fuzzy” AI automation should not run “hands-free” in research processes when the results are critical for the customer or business. As potential criticality rises, researchers should be more invested and “in the loop” of the work.
The more human interpretation, the less value in automated reinterpretation
Don’t overwrite researchers’ prior efforts. Instead, explore ways to discover, navigate, amend, and represent previously authored content. For example, AI shouldn’t summarize an insight title that researchers have refined and built consensus on, but AI-based features can build further connections out from that starting point, as sketched below.[3]
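One hedged sketch of that pattern, assuming the OpenAI embeddings API, sample insight titles, and an illustrative similarity threshold: the researcher-written titles are embedded verbatim and nearby pairs are surfaced as suggested links, so the human-authored wording is never rewritten.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Researcher-authored insight titles, embedded verbatim and never rewritten.
insight_titles = [
    "New admins abandon setup when role permissions are unclear",
    "Unclear permissions stall enterprise onboarding",
    "Power users export data to work around slow dashboards",
]

result = client.embeddings.create(
    model="text-embedding-3-small",  # illustrative embedding model choice
    input=insight_titles,
)
vectors = np.array([item.embedding for item in result.data])

# Cosine similarity between every pair of titles.
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarity = unit @ unit.T

THRESHOLD = 0.6  # illustrative cutoff; tune against your own corpus
for i in range(len(insight_titles)):
    for j in range(i + 1, len(insight_titles)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible link: {insight_titles[i]!r} <-> {insight_titles[j]!r}")
```

The design choice is the point: the model only ranks relatedness, while naming, confirming, or rejecting each connection stays with the researcher.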
[1] Stan Garfield, “15 Issues in Knowledge Management,” LinkedIn, October 9, 2017, https://www.linkedin.com/pulse/15-issues-knowledge-management-stan-garfield/
[2] For a summary of common issues with generative AI, including critical “hallucination” errors and harm through hidden biases, see: Kathy Baxter and Yoav Schlesinger, “Managing the Risks of Generative AI,” Harvard Business Review, June 6, 2023, https://hbr.org/2023/06/managing-the-risks-of-generative-ai
[3] Tripp Mickle, “Apple Plans to Disable A.I. Summaries of News Notifications,” The New York Times, January 16, 2025, https://www.nytimes.com/2025/01/16/technology/apple-ai-news-notifications.html