Boost your experimentation velocity by beating organizational inertia

Giannis Psaroudakis
Product Experimenters
5 min read · Jul 31, 2018

As a product manager at Optimizely, I’ve seen many organizations enthusiastically embark on their experimentation journey: kicking off their first tests and scoring early wins. But only a few keep up the momentum and are successful at creating a thriving experiment-first culture. Most get stuck at good enough — they are not able to expand their testing program beyond the handful of early adopters that embraced it in the first place.

Organizational inertia, the tendency companies have to continue on an existing trajectory and to fear changes to the way they work, is the biggest hurdle that holds teams back from embracing new processes — even ones as transformative as experimentation. That’s according to an expert who has been very successful at scaling experimentation in his organization to hundreds of tests each month. When I asked for his “secret”, he gave me a very enlightening answer:

“We make it our goal to empower as many teams as possible to run their first experiment, then let adoption grow organically from there.”

This sounds great in theory, but in practice it takes a lot of planning and well-oiled execution to overcome this institutional barrier. In this post, I’ll share a few best practices I’ve learned while at Optimizely — and prior to that, Microsoft — that you can employ as you look for ways to grow the adoption of your program.

1) Identify a north star

It’s important to clearly articulate the objective of your program and map it to your company’s objectives. This is commonly known as the Overall Evaluation Criterion (OEC). Create a common understanding across your organization of what you’re optimizing for and how you measure success. While you’re thinking about this, read about the perils of experimenting with the wrong metrics to avoid some common mistakes.
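
To make the idea concrete, an OEC is often expressed as a weighted combination of a few key metrics, so every team scores experiments the same way. Here is a minimal sketch; the metric names and weights are hypothetical examples, not Optimizely’s formula:

```python
# A minimal sketch of an Overall Evaluation Criterion (OEC).
# The component metrics and weights below are hypothetical examples;
# choose ones that map to your company's objectives.

OEC_WEIGHTS = {
    "sessions_per_user": 0.5,  # engagement
    "conversion_rate": 0.4,    # revenue proxy
    "error_rate": -0.1,        # guardrail: regressions count against you
}

def oec_score(metric_deltas: dict) -> float:
    """Combine per-metric relative changes (e.g. +0.02 = +2%) into one score."""
    return sum(OEC_WEIGHTS[m] * delta for m, delta in metric_deltas.items())

# Example: an experiment that lifts engagement but also adds errors.
print(oec_score({"sessions_per_user": 0.03,
                 "conversion_rate": 0.01,
                 "error_rate": 0.02}))  # 0.017
```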

2) Map your journey with program adoption targets

Peter Drucker’s famous mantra, “you can’t improve what you can’t measure,” applies in the context of increasing your program’s adoption. Industry veterans with an experiment-first mindset measure the adoption of experimentation and set targets around it. At Optimizely, we have team-level and org-level targets (in the form of OKRs) for Experimentation Velocity: we set targets for the “number of experiments run per quarter”. Other organizations measure adoption as the “percentage of teams that have run at least one test each period.”
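
Both metrics are easy to compute from a simple experiment log. The sketch below is illustrative; the log format and team names are assumptions, not Optimizely’s data model:

```python
from datetime import date

# Hypothetical experiment log: (team, launch_date) pairs.
experiments = [
    ("search", date(2018, 7, 2)),
    ("search", date(2018, 8, 20)),
    ("checkout", date(2018, 9, 14)),
]
all_teams = {"search", "checkout", "onboarding", "growth"}

def in_quarter(d: date, year: int, quarter: int) -> bool:
    return d.year == year and (d.month - 1) // 3 + 1 == quarter

# Metric 1: number of experiments run per quarter.
q3 = [(team, d) for team, d in experiments if in_quarter(d, 2018, 3)]
print("experiments in Q3 2018:", len(q3))  # 3

# Metric 2: percentage of teams that ran at least one test in the period.
active = {team for team, _ in q3}
print("adoption:", 100 * len(active) / len(all_teams), "%")  # 50.0 %
```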

3) Help trailblazers shine

Every successful experimentation program starts with a few people who see the value of experimentation before the rest (at Optimizely, we like to call them experimentation heroes). They’re willing to invest time and energy to solve their problems beyond the processes and tools available to them. These early adopters aren’t just the first to experiment; they’re the vocal supporters who will influence the rest of your organization. You’ll need to patiently identify them and cater to their early needs, as they give your program the propulsion it needs to reach the majority of your teams — very much like a new product that strives to become a mainstream success, as famously illustrated in Geoffrey Moore’s Crossing the Chasm (a 4-minute summary of his work can be found here).

4) Smooth the path for the rest of the pack

Once your early adopters are “bought in,” your next and perhaps biggest challenge is to persuade the majority. Unlike early adopters, the majority is more pragmatic and doesn’t always have the patience to try new things. They expect their first experience with a new process to be frictionless. Your goal should be to apply the lessons from your early adopters and eliminate barriers. The rule of thumb I’d use is to enable a team to “set up and run their first experiment in less than 10 minutes.” Even if that experiment tests a very simple change, the delight of seeing the first results page without encountering surprises will keep them motivated to experiment more. At Optimizely, one metric we use to benchmark the ease of use of our product is customers’ “time to first experiment” – from the time they first access our app until they launch their first test. Use this metric or a similar one to benchmark the ease of your own process.
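
If you instrument the two timestamps, the metric is straightforward to compute. The sketch below is a hypothetical illustration; the event names and data shape are assumptions, not Optimizely’s API:

```python
from datetime import datetime
from statistics import median

# Hypothetical per-customer timestamps: first app access and first
# launched experiment.
events = {
    "acme":   {"first_access": datetime(2018, 7, 1, 9, 0),
               "first_experiment": datetime(2018, 7, 1, 9, 8)},
    "globex": {"first_access": datetime(2018, 7, 3, 14, 0),
               "first_experiment": datetime(2018, 7, 5, 10, 0)},
}

# Time to first experiment, in minutes, per customer.
ttfe = {name: (e["first_experiment"] - e["first_access"]).total_seconds() / 60
        for name, e in events.items()}

print(ttfe)  # {'acme': 8.0, 'globex': 2640.0}
print("median:", median(ttfe.values()), "minutes")  # median: 1324.0 minutes
```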

5) Let experiment results tell you how far you’ve come

Rely on experiment results to track your teams’ progress against quarterly, semi-annual, or annual goals. When I was at Microsoft Bing, most teams quantified their status against committed targets through experiment scorecards. For instance, our performance team kept a detailed balance sheet of the milliseconds shaved off page load time. We also measured time added to the page (usually by other teams) as part of our A/B testing results. Measuring experiment results encouraged the team to rely exclusively on experiments to quantify gains and losses. It was also essential for setting “performance budgets” for other teams that wanted to ship valuable changes that negatively impacted page load time. The same practice was followed by other teams, like the Relevance and Ads teams; ultimately, it was a key reason why these teams were among the most prolific experimentation teams in the organization. On a related note, here’s an article from Harvard Business Review on how experimentation transformed Bing.
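
To make the “performance budget” idea concrete, here is a minimal sketch of the kind of ledger and gate a team might keep. The budget size and scorecard entries are hypothetical, not Bing’s actual figures:

```python
# Hypothetical performance-budget ledger, in milliseconds of page load time.
# Positive entries are regressions measured via A/B tests; negative entries
# are savings the performance team banked.

QUARTERLY_BUDGET_MS = 50  # total regression the org allows this quarter

ledger = [
    ("ads: richer ad layout", +18),
    ("perf: lazy-load sidebar", -25),
    ("relevance: extra ranking pass", +30),
]

spent = sum(ms for _, ms in ledger if ms > 0)    # 48 ms of regressions
saved = -sum(ms for _, ms in ledger if ms < 0)   # 25 ms of savings
remaining = QUARTERLY_BUDGET_MS - spent + saved  # 27 ms left

print(f"spent {spent} ms, saved {saved} ms, {remaining} ms of budget left")

def can_ship(change_ms: int) -> bool:
    """Gate a change on whether its measured regression fits the budget."""
    return change_ms <= remaining

print(can_ship(20))  # True: a 20 ms regression fits the remaining 27 ms
```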

6) Celebrate the lessons you learned and find patterns you can use for your next adventure

As you iterate your program toward perfection, remember to always celebrate the lessons – good or bad – from your journey. At Bing, we celebrated “failed” hypotheses just as we would successful ones. We also dedicated a good amount of time, internally and in public forums, to discussing unforeseen results from our experiments. And we found ways to see the positive side of bad outcomes: when one A/B test slowed down the load time of a specific asset on our search page, we didn’t “punish” the team behind it. Rather, we saw it as an opportunity to study how slowing down different sections of our page influenced our user engagement metrics. The study formed the basis of new work to optimize perceived performance, and the team was widely acknowledged for their effort.

In Conclusion

These are just a few of the best practices you can employ to beat inertia and boost your experimentation velocity by bringing most of your teams into the experimentation process. Do you follow other best practices that are missing from this list? Drop us a note in the comments to keep the conversation going!
