Enabling collaboration with standards for describing research across teams and disciplines — to get more done with insights

Integrating research > Forging systemic knowledge > D. Improving upon existing research knowledge management practices > Article D1

How should I think of this piece of research? How does it relate to other inputs that we could build our product plans around?

Insight consumers can become confused when different insight generators, each vying for attention and impact, talk about their work in isolation. Researchers of different stripes can clarify what they offer — and start to build a more connective community — by defining common ways of describing each type of research. Communities can come together to stand on a foundation of shared language.

When great research teams deliver results, they provide context on the study and its methods in a way that educates product people and frames the insights. In some cases this framing receives more attention — some insight generators feel like they have more explaining to do. In other cases, descriptive framing is an afterthought, all but left out due to tight deadlines.

Even at their best, study descriptions can usually be improved. While typical descriptions work within a single report or team, they do not help product people to understand the bigger picture across the research that’s available to them. When comparing research from two different sources, insight consumers must take on the interpretive labor to build a point of view on how each source relates. This point of view is often shaped by biases and different levels of research literacy.

Researchers within an organization can workshop and define standard ways of describing their varying outputs — a sort of ‘nutrition facts’ that can aid in the comprehension of each study or individual insight.

As someone who steps in and takes stock of past research within organizations (C1), I’ve seen research descriptions ranging from non-existent to overlong. Report authors often implicitly assume that research consumers have some basic understanding of what’s being offered to them.

Standard ‘description blocks’ for research can answer the ‘5 Ws’ in a digestible format, drawing attention to factors that are crucial to comparing research outputs. Research communities can define some of the elements as closed lists to drive consistency (e.g. research team) and leave other elements open to allow for diverse, project-specific values (e.g. key topics). These descriptions can become linked gateways to more information, allowing insight consumers to drill in for fuller description, including strengths, weaknesses, and caveats.
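To make this concrete, here’s a minimal sketch of what a description block might look like as a data structure, using Python purely for illustration. All field names and vocabulary values below are hypothetical stand-ins; a real research community would define its own closed lists and open fields.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

# Closed lists drive consistency: values come from vocabularies the
# research community agrees on. (These example values are hypothetical.)
class ResearchTeam(Enum):
    UX_RESEARCH = "UX Research"
    DATA_SCIENCE = "Data Science"
    MARKET_RESEARCH = "Market Research"

class Method(Enum):
    USABILITY_TEST = "Usability test"
    SURVEY = "Survey"
    INTERVIEW_STUDY = "Interview study"

@dataclass
class DescriptionBlock:
    """A 'nutrition facts' block pairing closed, comparable fields
    with open, project-specific ones."""
    title: str
    team: ResearchTeam                   # closed: who did the work
    method: Method                       # closed: how the work was done
    date_conducted: str                  # e.g. "2024-Q2"
    key_topics: list[str] = field(default_factory=list)  # open: project-specific
    caveats: Optional[str] = None        # open: strengths, weaknesses, limits
    more_info_url: Optional[str] = None  # linked gateway to fuller description
```

The split is the point: closed fields stay comparable across every report, while open fields absorb the project-specific detail that doesn’t need to be standardized.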

Similar to referencing each other’s evidence in new reports (C2), defining standard metadata can be another touchpoint for growing mutual respect and collaboration among otherwise distant researchers. And as with sharing research roadmaps (B3), arriving at shared ‘nutrition facts’ requires researchers to step up out of their teams and silos to actually learn the specifics of what other insight generators are doing. Shared standards also present an opportunity for a research community to draw attention to aspects of their outputs that they want to emphasize, such as gold-standard methods or key caveats.

As dry as these descriptions may seem, I’ve found that once people see descriptive standards put to use, the resulting growth in research literacy is a clear win. And with a fuller understanding of the larger tapestry of research variety, product and design leaders can make better decisions and think through which types of research they want to see more of.

Descriptive standards are also a useful step toward any effort to aggregate or hybridize research — such as research repositories. A register of research suddenly becomes much easier to scan when reports are characterized in common ways. And since every insight or other object within a report can hierarchically inherit the report’s description, this type of standardization can also be a stepping stone to assembling insights in an ‘insight hub’ or other granular repository.
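As a sketch of how that inheritance might work, assuming (hypothetically) that descriptions are stored as simple key-value fields, each insight can resolve its effective description by starting from its parent report and overriding only what differs:

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    title: str
    description: dict[str, str]  # the report-level 'nutrition facts'

@dataclass
class Insight:
    summary: str
    report: Report
    overrides: dict[str, str] = field(default_factory=dict)

    def effective_description(self) -> dict[str, str]:
        # Start from the parent report's description, then layer any
        # insight-specific values on top.
        return {**self.report.description, **self.overrides}

# Example: the insight inherits 'team' and 'method' from its report,
# while narrowing 'key_topics' to its own focus.
report = Report(
    title="Checkout usability study",
    description={"team": "UX Research", "method": "Usability test",
                 "key_topics": "checkout, payments"},
)
insight = Insight(
    summary="Users miss the promo-code field on mobile checkout",
    report=report,
    overrides={"key_topics": "promo codes, mobile checkout"},
)
print(insight.effective_description())
```

With a fallback like this in place, an ‘insight hub’ can filter individual insights on report-level fields without anyone re-entering metadata.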

Improving your insight operations

Get more done with your research community’s insights by:

  • Gathering existing research descriptions and assembling a working group
    Seek out examples of internal language for describing research, and locate any relevant controlled vocabularies that you can build from. Include sources that product people consider research, rather than limiting scope to specific research disciplines. While assessing the current state, identify interested insight generators and insight seekers who can contribute to your standards effort.
  • Identifying common and infrequent research descriptions
    When gathering descriptive metadata from different teams, some standards will emerge as relatively straightforward, with easy alignment to how varying researchers talk about their work (e.g. date conducted). However, researchers in some disciplines may have a range of ‘in the weeds’ descriptors that are harder to process. A quick usability study with insight seekers can help shift perspectives on what’s actually primary description and what can be included as secondary, free-form context. And where possible, don’t let the terminology used in your research tools dictate your standards.
  • Pushing further to define research inter-relations and call out key distinctions
    Define standards for consistent study titles (or at least a list of ‘don’ts’). Explore summarizing more of the method section and the ‘why’ of the research into standard descriptions. Generate new descriptive content to call out key distinctions among different types of research, and to explain research methods, strengths, and caveats. Add links to educational content where insight seekers can, from any research artifact, grow their understanding of the insights they are looking at, as well as how those insights fit into the bigger picture of research within your organization (method categorization, a glossary of research terms, etc.).
  • Cutting down and standardizing to ensure description is legible ‘at a glance’
    Overlong descriptions without corresponding value will increase friction on the authoring side and create unnecessary work for research consumers. Look for opportunities to combine fields. Set aside items that are more about findability than study description. Seek plain-language alternatives. Rank and organize fields into logical groupings. Standardize ‘closed’ values where appropriate to simplify authoring and consumption. Push ‘open’ descriptive fields to the end of your ‘nutrition facts’ block. Again, internal prototype studies with insight seekers can identify areas to improve.
  • Iterating and operationalizing your new standard
    As with any set of standards, this is not a ‘one and done’ process. Pilot small and iterate. To grow collective identity across contributing researchers, name your new standard and seek leadership buy-in for the work that it will take to apply it broadly. Develop a process, perhaps owned by staff who ensure that each research output is a meta-analysis of compiled learning (C2), to inspect whether new reports are implementing your descriptive standards effectively; a sketch of this kind of check appears after this list. Use gaps in the portfolio of research currently available in your organization to argue for changes to research roadmaps (B3) and staffing.
  • Your idea here…
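Because much of a description standard is mechanical (required fields present, closed fields drawn from an agreed vocabulary), the inspection step above can be partly automated. Here’s a minimal lint sketch in Python; the field names and vocabularies are hypothetical and would come from your own standard:

```python
# Hypothetical standard: required fields and closed vocabularies would be
# defined by your research community, not hard-coded like this.
REQUIRED_FIELDS = {"title", "team", "method", "date_conducted"}
CLOSED_VOCABULARIES = {
    "team": {"UX Research", "Data Science", "Market Research"},
    "method": {"Usability test", "Survey", "Interview study"},
}

def lint_description(description: dict[str, str]) -> list[str]:
    """Return human-readable problems; an empty list means the block passes."""
    problems = []
    for name in sorted(REQUIRED_FIELDS - description.keys()):
        problems.append(f"missing required field: {name}")
    for name, allowed in CLOSED_VOCABULARIES.items():
        value = description.get(name)
        if value is not None and value not in allowed:
            problems.append(f"{value!r} is not an allowed value for {name}")
    return problems

# Example: a report that drifted from the standard.
print(lint_description({"title": "Pricing survey", "team": "Growth",
                        "method": "Survey"}))
# -> ["missing required field: date_conducted",
#     "'Growth' is not an allowed value for team"]
```

A check like this won’t judge whether a description is useful, but it keeps the easy-to-automate parts of the standard from drifting while reviewers focus on quality.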

On the path from insight to product impact

Enabling collaboration with standards for describing research across teams and disciplines is part of being seen as having sufficient evidence to drive product plans. It’s also related to usefully articulating insights in a way that can be understood in the larger context of available research in an organization.

Let’s connect

If you’ve read this far, please don’t be a stranger. I’m curious to hear about your challenges and successes defining standards for describing different kinds of research in your organization. Thank you!

LinkedIn | Medium | New article emails | Twitter

Selected references

  • “Our goal [is] to give researchers the best chance to share knowledge with peers and non-researchers within their organisation by creating a UX Research vocabulary that:
    - is logical
    - is memorable
    - is easy to adapt one’s workflow to
    - is easy to integrate with existing repositories and platforms
    - is powerful enough to be used for data mining in the future.
    - extends as many existing standards as possible
    - is supported by industry, yet remains independent
    - is open source” Paul Kotz
    https://medium.com/researchops-community/little-dictionaries-making-an-open-source-taxonomy-for-ux-research-repositories-5fe015a6d493
  • “Consider if we even had just a few common tags:
    - Fiscal year (standardized list)
    - Department name (standardized list)
    - Scope (standardized list e.g. website, application, program, service)
    - Topic (unstructured)
    - Project title (unstructured)
    - Project description (unstructured)
    - Type of research (standardized list e.g. usability testing, feedback)
    - Research question or task (unstructured) — one entry per question or task

    A basic set of tags like this, with both standardized and unstructured elements, would enable merging all of the data into a central pool.” spydergrrl
    https://www.spydergrrl.com/2021/04/swimming-in-ux-data-lake-using-machine.html
  • “Ultimately, we landed on the following metadata for each research project:
    - Title of project
    - Start date
    - Researcher(s)
    - Status (planning, execution, analysis, reporting, completed)
    - Related department(s)
    - Tags (for type of research, e.g. user interviews, usability testing)
    - Products (e.g. main website, Weaver Library)
    - Keywords
    - User type” Rebecca Blakiston
    https://medium.com/ux-ed/wheres-that-data-again-fcd3a533427b

Articles on research repositories and better integrating streams of research into product planning. Tech organizations are acting like labs without collective notebooks, unlocking only limited value from their research investments. Let’s get more done with research insights.

Jake Burghardt

Focused on integrating streams of customer-centered research and data analyses into product operations, plans, and designs. www.linkedin.com/in/jakeburghardt