To scale research across 230 scrum teams, we have built tools and instruments that help teams conduct their own high-quality user research.
In a previous article, “Embedding Product Design in a Large Agile Organization,” we detailed how our “one designer per scrum team” org model pushes us toward needing more generalist design talent. We’ve embraced this, focused on the benefits, and worked to support generalist design to the extent it’s sensible and achievable. A critical example is the sub-discipline of design research.
Prior to our agile transformation, User Research was a functional group deployed via a shared-service model, serving research needs across many product development teams. Often stretched far too thin, the group struggled to provide adequate, consistent, and timely research. A large proportion of research projects focused on feature-specific evaluative research serving a single, narrow initiative. Far fewer studies were designed and packaged for universal consumption across the company.
Shifting the research model
With the re-structuring of our design organization in 2017, we moved away from our shared service research model. We have centralized qualitative and quantitative UX research specialists in our DesignOps team and they spend their time in two ways:
- Democratizing research: They use their deep expertise in user research to build research systems and provide consultative support services (e.g., Office Hours) so that scrum teams can conduct their own exploratory and evaluative research.
- Developing and disseminating strategic insights: These research experts conduct strategic, generative research studies (qual and quant) that a single scrum team would not take on. These studies aim to feed many teams for longer periods of time and help inform business and product decisions.
For scrum teams, the benefits of this model are:
- Autonomy: Product teams can conduct product and feature-specific research themselves (with consultative support) whenever the time is right for them.
- Acceleration: Teams can conduct rapid research and iterate quickly by tapping into their subject matter expertise. There’s no handoff of knowledge, which minimizes the need to create time-consuming research deliverables.
- Impact: Designers and POs leading and conducting research internalize the findings more deeply and are better able to apply that knowledge to the product design throughout execution.
For Design Researchers, the benefits include the opportunity to productize their deep experience into “research systems” and the ability to dedicate more of their time to uncovering high-value, strategic insights that serve a broader audience over a longer period of time.
Our Research Systems
To help scrum designers and product owners lead and conduct research, we created the Experience Definition & Measurement Framework: a collection of knowledge, self-serve tools, and research instruments. For this to succeed and have a meaningful impact on business outcomes, we prioritized our efforts:
- Providing access to users
- Building the right thing (Product-Market Fit)
- Building the thing right (Design Quality)
User Panels and Recruiting Channels
In our previous organizational model, up to 50% of a researcher’s time could be spent recruiting users for studies. Asking scrum designers, each the sole designer on a product team, to take this on would have been a major barrier to teams owning their own research. In 2017, we built the Research Council: a panel of 3,000 athenahealth and epocrates users who volunteered to participate in studies. We also built a small but impactful team of Research Coordinators who cultivate and engage the panel and create channels for study recruiting. One such channel is a bi-monthly newsletter to panel participants listing all the available studies they can join.
Product-Market Fit: Determining the right thing to build
If we are building the wrong things, it doesn’t matter how well designed they are. So, we’ve focused significant energy on building instruments that help teams surface opportunities, evaluate solutions, and validate concepts.
Surfacing opportunities: We are testing an approach called Outcome-Driven Innovation (invented by Anthony Ulwick — pioneer of Jobs-to-be-Done theory) to prioritize the unmet needs of key user roles that present the best opportunities for investment. Outcome-Driven Innovation is role-based and oriented around jobs-to-be-done. We create job maps for each user role and develop a survey instrument that presents outcome statements that users rate for importance and current satisfaction. This survey is distributed to users within and beyond the athena network. An opportunity score is generated for every outcome statement and this helps us prioritize which unmet needs we should focus on.
“Outcome-Driven Innovation® (ODI) is a strategy and innovation process that ties customer-defined metrics to the “job-to-be-done”, making innovation measurable and predictable. The process employs qualitative, quantitative, and market segmentation methods that reveal hidden opportunities for growth. ODI has an 86 percent success rate — a five-fold improvement over the industry average.” https://strategyn.com/outcome-driven-innovation-process/
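To make the opportunity score concrete, here is a sketch using Ulwick’s widely published formula, in which each outcome’s importance and satisfaction ratings (normalized here to a 0–10 scale) combine so that important-but-unsatisfied outcomes score highest. The outcome statements and survey numbers below are hypothetical, for illustration only:

```python
def opportunity_score(importance, satisfaction):
    """Ulwick's commonly cited ODI formula.

    importance, satisfaction: 0-10 ratings aggregated from survey
    responses. Satisfaction above importance is clamped to zero so
    "overserved" outcomes don't subtract from the score. Higher
    scores indicate more underserved (higher-opportunity) outcomes.
    """
    return importance + max(importance - satisfaction, 0)

# Hypothetical outcome statements with (importance, satisfaction) ratings
outcomes = {
    "Minimize time to verify insurance eligibility": (9.1, 4.2),
    "Minimize errors when reconciling claims": (8.4, 6.9),
    "Maximize visibility into denial reasons": (7.2, 7.5),
}

# Rank outcomes so teams can prioritize the biggest unmet needs
ranked = sorted(
    ((name, opportunity_score(imp, sat)) for name, (imp, sat) in outcomes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{score:5.1f}  {name}")
```

In this toy data, the eligibility-verification outcome tops the list: it is rated highly important but poorly satisfied, which is exactly the gap the dashboards surface for teams.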
Exploring solutions: Product teams tap into the Opportunity Score dashboards to find unmet user needs. They then start ideating, creating many potential solutions to address those user problems. We created a Resonance Testing Toolkit to guide teams through the process of getting feedback from users and synthesizing it to narrow down to a couple of concepts.
Validating Concepts: It’s critical that teams know very early on whether an idea has legs or whether they should pivot. Learning this in the Experiment phase is ideal. To address this need, we built an unmoderated Concept Validation survey instrument that shows users a video of a concept and then asks questions that measure the perceived usefulness of the idea, the users’ intent to use it, and how well it meets their needs. It is a self-serve instrument: teams set up a study via an intake form, which generates the study and posts it for recruiting in the Research Council newsletter.
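A minimal sketch of how a team might aggregate responses from such a concept validation study follows. The field names, the 5-point scale, and the go/pivot threshold are all assumptions for illustration; the article does not specify the instrument’s actual questions or scoring:

```python
from statistics import mean

# Hypothetical survey responses; each record rates one concept
# on a 1-5 scale across the three dimensions named in the article.
responses = [
    {"usefulness": 5, "intent_to_use": 4, "meets_needs": 5},
    {"usefulness": 3, "intent_to_use": 2, "meets_needs": 3},
    {"usefulness": 4, "intent_to_use": 4, "meets_needs": 4},
]

def summarize(responses, scale_max=5):
    """Average each dimension and flag whether it clears a
    hypothetical go/pivot threshold (mean >= 70% of the scale)."""
    summary = {}
    for key in ("usefulness", "intent_to_use", "meets_needs"):
        avg = mean(r[key] for r in responses)
        summary[key] = {"mean": round(avg, 2), "passes": avg >= 0.7 * scale_max}
    return summary
```

With this toy data, usefulness and meets_needs clear the bar but intent_to_use does not, which is the kind of early signal that tells a team to iterate on the concept before investing further.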
Building the thing right: Design Quality
It’s not unusual for designers to struggle to convince non-design team members of design flaws and the consequences of those flaws. What is a genuine concern for the success of the product can come across as subjective nit-picking. With the support of our Chief Product Officer, we undertook an evaluation of 55 of our most critical workflows to quantify the quality of the experiences we were providing to users. We used this opportunity to introduce more scientific language into the design vernacular and marry it with a scoring scheme. And to ensure this was not rejected outright by the rest of the R&D organization, we brought product managers, engineers, and subject matter experts into the evaluation and scoring process. (Read more about our Design Quality initiative here)
This resulted in the creation of a Heuristic Evaluation tool: using Jakob Nielsen’s 10 Heuristics for User Interface Design and a scoring legend, teams can baseline their workflows and monitor them as they make design changes. Recently, we’ve also started testing PURE (Practical Usability Rating by Experts) as another tool to help teams decide between multiple design solutions.
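As a rough illustration of how a heuristic scoring scheme can turn an evaluation into a trackable baseline, here is a sketch in Python. The 0–4 severity legend and the 0–100 quality score are hypothetical; the article names Nielsen’s heuristics but does not describe the actual scoring legend used:

```python
# Nielsen's 10 usability heuristics (the names are standard;
# the severity scale below is a hypothetical 0-4 legend,
# 0 = no issue, 4 = catastrophic).
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def workflow_quality_score(severity_by_heuristic):
    """Convert per-heuristic severity ratings into a 0-100 quality
    score, so a team can baseline a workflow and re-score it after
    each design change to see whether quality is trending up."""
    max_total = 4 * len(NIELSEN_HEURISTICS)
    total_severity = sum(severity_by_heuristic.values())
    return round(100 * (1 - total_severity / max_total), 1)
```

The value of a scheme like this is less the specific numbers than the shared vocabulary: a cross-functional group scoring the same workflow against the same legend turns “this feels clunky” into a number everyone can track over time.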
Scaling user research across an agile organization can be difficult
One of the biggest issues we have run into is teams’ desire to create bespoke research studies and instruments. We are trying to productize research, and it takes time to find the right fit, especially when you are supporting over 200 scrum teams. So we spend a lot of time offering consults and gathering feedback to understand what teams are trying to achieve. As we learn more about their needs, we fold those changes into the tools and instruments.
Scaling and democratizing research requires equal parts innovation and incremental improvement to find the right product-market fit for the tools themselves (it’s very meta!). As we start rolling this out to more of the org, we’ll report back our findings. Contact us if you’d like to chat!