Measuring your Research Operations Maturity

Dave Malouf
Amplify Design
Sep 19, 2018 · 3 min read


Recently I was asked how I measure the maturity of a Research Operations (ResOps) practice in any organization. To be honest, before being asked, I thought the question was a bit premature. Many of the people with the most ResOps experience didn’t even know they were doing it until this past year, so they haven’t really been codifying the practice across any diverse set of axes.

So in the heat of the moment this was my response:

So then I started thinking to myself: there needs to be more to this. In my mind, this is less a metrics thing and more of a heuristics thing. What comes to mind is the work I did while at HPE, where we created a heuristic form based on the principles we developed, plus some outside sources and our personal experiences. For each quality, like clarity, we developed a set of heuristic questions with numeric scores of 1 to 4, where 1 was bad and 4 was awesome. There was no neutral score, on purpose.

Here’s an example:

There would be 4 to 5 questions per top-level principle or category. We would then average the scores across all questions within a category, and then average the category scores to come up with a total score.
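The averaging scheme above can be sketched in a few lines of Python. The category names and scores below are hypothetical, just to show the mechanics:

```python
# A minimal sketch of the scoring scheme: each category holds several
# heuristic questions scored 1-4 (no neutral midpoint), a category's
# score is the average of its questions, and the total is the average
# of the category scores.

def category_score(question_scores):
    """Average the 1-4 scores of all questions in one category."""
    return sum(question_scores) / len(question_scores)

def total_score(categories):
    """Average the per-category scores into a single maturity score."""
    scores = [category_score(qs) for qs in categories.values()]
    return sum(scores) / len(scores)

# Hypothetical scores for two categories, 4-5 questions each.
assessment = {
    "Inclusion": [3, 2, 4, 3],
    "Rigor": [2, 2, 3, 2, 1],
}

print({name: category_score(qs) for name, qs in assessment.items()})
# {'Inclusion': 3.0, 'Rigor': 2.0}
print(total_score(assessment))
# 2.5
```

Note that because categories are averaged after their questions, a category with five questions carries the same weight in the total as one with four.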

We also had a vision, though we never figured out how to make it work, of a spider graph where each category score would become the plot point on its own leg of the web, like this:

This gives the user an easy way to see at a glance how “full” their scores are, as well as to compare across categories.
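The spider-graph idea can be sketched as well: each category gets a leg at an evenly spaced angle around the web, and the category’s 1–4 score sets how far along that leg its plot point sits. The scores below are hypothetical, and a plotting library such as matplotlib (polar axes) could render the resulting polygon:

```python
# Sketch: map each category's 1-4 score to an (x, y) vertex on its own
# evenly spaced leg of a spider (radar) graph.
import math

def spider_points(category_scores, max_score=4):
    """Return (name, x, y) vertices of the radar polygon, one per category."""
    n = len(category_scores)
    points = []
    for i, (name, score) in enumerate(category_scores.items()):
        angle = 2 * math.pi * i / n   # legs evenly spaced around the web
        radius = score / max_score    # normalize the 1-4 score to 0-1
        points.append((name, radius * math.cos(angle), radius * math.sin(angle)))
    return points

# Hypothetical category scores for one assessment.
scores = {"Inclusion": 3.0, "Diversity": 2.5, "Empathy": 4.0,
          "Holism": 2.0, "Synthesis": 3.5, "Rigor": 2.0}
for name, x, y in spider_points(scores):
    print(f"{name:10s} ({x:+.2f}, {y:+.2f})")
```

Connecting the vertices in order (and closing the loop back to the first) gives the “fullness” polygon; a perfect 4 in every category would trace the outer edge of the web.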

So again, the list of items with a bit of an explanation for each:

  • Inclusion:
    - Who is included in all the stages of research?
    - This means the internal team and types of research subjects.
    - Is the full spectrum of people involved in the problem space included?
    - Data comes from many sources in the organization.
  • Diversity:
    - What processes are in place to ensure diverse viewpoints are included in all stages of research?
    - What is the current state of diversity of both internal teams and representations of data collected?
    - Diverse data types are included in the process.
  • Empathy:
    - How much is empathy spread through the organization for customers & users of products and services?
    - Who in the organization can share stories of customers & users that can express their emotional and cognitive mental models?
  • Holism:
    - Is research done in a holistic manner?
    - Is the 360-degree journey of customers widely and well understood?
  • Synthesis:
    - Is collected data aggregated and synthesized into models, prototypes, and visions?
  • Rigor:
    - Is data gathered in ways to keep data clean and to avoid wrongful conclusions?

I’m sure there are more questions for all of the above, and even some other qualities we could add to the mix. I have a few that are inspiring me, but I’d love to hear from others.


Dave Malouf is a specialist in Design Operations with over 25 years of experience designing and leading in digital services. I coach people and act as a thought partner.