Measuring your Research Operations Maturity

Dave Malouf
Sep 19, 2018 · 3 min read

Recently I was asked how I measure the maturity of a Research Operations (ResOps) practice in any organization. To be honest, before being asked, I thought the question was a bit premature. Many of those with the most experience in ResOps didn’t even know they were doing it until this past year, so they haven’t really been codifying the practice across any sort of diverse set of axes.

So in the heat of the moment this was my response:

[Image: my off-the-cuff response]

So then I started thinking to myself, there needs to be more to this. In my mind this is less a metrics thing and more of a heuristics thing. What comes to mind is the work I did while at HPE, where we created a heuristic form based on the principles we developed, plus some outside sources and our personal experiences. For each quality, like clarity, we would develop a set of heuristic questions with numeric scores of 1 to 4, where 1 was bad and 4 was awesome. There was deliberately no neutral midpoint.

Here’s an example:

[Image: example heuristic questions scored on the 1–4 scale]

There would be 4 to 5 questions per top-level principle or category. We would then average the scores across all questions in a category, and then average those category scores to come up with a total score.
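That roll-up is simple enough to sketch in a few lines of Python. The categories match the list below, but the scores themselves are made up for illustration, not taken from any real assessment:

```python
# Hypothetical ResOps heuristic scores: each category maps to its
# question scores on the 1-4 scale (deliberately no neutral midpoint).
scores = {
    "Inclusion": [3, 2, 4, 3],
    "Diversity": [2, 2, 3],
    "Empathy":   [4, 3, 3, 4],
    "Holism":    [2, 3],
    "Synthesis": [3, 3, 2, 4],
    "Rigor":     [4, 4, 3],
}

# First, average the question scores within each category...
category_scores = {cat: sum(qs) / len(qs) for cat, qs in scores.items()}

# ...then average the category scores to get the total maturity score.
total_score = sum(category_scores.values()) / len(category_scores)

print({cat: round(s, 2) for cat, s in category_scores.items()})
print("Total:", round(total_score, 2))
```

The two-step average (per category, then across categories) keeps a category with many questions from outweighing one with only a couple.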

We also had a vision, though we never figured out how to make it work, of a spider graph where each category score would become the plot point on its own leg of the web, like this:

[Image: spider graph with one leg per category score]

This gives the user an easy way to see at a glance how “full” their measures are, as well as to compare across categories.
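For what it’s worth, the spider graph itself isn’t hard to build today. Here’s a minimal sketch using matplotlib’s polar projection, again with hypothetical category averages rather than real data:

```python
import math

import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt

# Hypothetical category averages on the 1-4 scale from the heuristic form.
category_scores = {"Inclusion": 3.0, "Diversity": 2.3, "Empathy": 3.5,
                   "Holism": 2.5, "Synthesis": 3.0, "Rigor": 3.7}

labels = list(category_scores)
values = list(category_scores.values())

# One angle (one leg of the web) per category, evenly spaced around the circle.
angles = [2 * math.pi * i / len(labels) for i in range(len(labels))]

# Close the polygon by repeating the first point at the end.
angles += angles[:1]
values += values[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)  # shading shows how "full" the web is
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 4)  # radial axis runs the full 1-4 scale
fig.savefig("resops_maturity.png")
plt.close(fig)
```

Fixing the radial axis at the full 1–4 scale is what makes the “fullness” reading honest: an immature practice shows up as a small web near the center rather than being rescaled to fill the chart.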

So again, the list of items with a bit of an explanation for each:

  • Inclusion:
    - Who is included in all the stages of research?
    - This means the internal team and types of research subjects.
    - Is the full spectrum of people involved in the problem space included?
    - Data comes from many sources in the organization.
  • Diversity:
    - What processes are in place to ensure diverse viewpoints are included in all stages of research?
    - What is the current state of diversity of both internal teams and representations of data collected?
    - Diverse data types are included in the process.
  • Empathy:
    - How much is empathy spread through the organization for customers & users of products and services?
    - Who in the organization can share stories of customers & users that can express their emotional and cognitive mental models?
  • Holism:
    - Is research done in a holistic manner?
- Is the 360-degree journey of customers widely and well understood?
  • Synthesis:
- Is collected data aggregated and synthesized into models, prototypes, and visions?
  • Rigor:
    - Is data gathered in ways to keep data clean and to avoid wrongful conclusions?

I’m sure there are more questions for all of the above, and even some other qualities we could add to the mix. I have a few that inspire me, but I’d love to hear from others.

Amplify Design

Amplifying your design team’s value through design operations (DesOps)

Dave Malouf

Written by

Dave Malouf is a design leader who helps teams provide the greatest value to their customers and host organizations.

