The Optimization Rut: How to Know When to Move On

Jason van der Merwe
4 min read · Jun 20, 2023


Sometimes moving important metrics feels so easy — every A/B test is a huge win. And sometimes, no matter how many ideas you try, you cannot improve the target metric. But you keep trying, believing that there must be a winning variant out there somewhere. You might be right, or you might be stuck in the optimization rut.

In a previous post, I argued that Growth Teams should behave like detectives. They should be looking for opportunities where extracting more value from the user experience can result in impressive gains in metrics like retention or new user growth. But sometimes we get stuck spinning our wheels. This happens when our research, data, or intuition tells us there’s an opportunity, but every attempt to move the needle offers little to no return on investment. It’s human nature to want to double down and not give up. But sometimes moving on is the prudent move.

In this post, I’m joined by Rip Sanghani, Strava’s Group Product Manager for the Subscriptions space. Together we’ll share some tips on how to recognize when it’s time to move on and focus your team’s attention elsewhere.

Moving on can feel like a breakup, but sometimes you just need to do it!

Sign #1 — Neither small nor large experiments show positive results

When you’re trying to improve a specific metric, you should be A/B testing a mix of small and large changes. Trying a wide range of ideas lets you test users’ sensitivity to a given experience while also giving you a better shot at finding the optimal one. Sometimes the smallest alterations, like copy changes, can have a huge impact. But sometimes you need to completely change an experience in order to move a metric. For instance, let’s say you’re optimizing for a metric like push notification opt-in rate. You might try a few smaller experiments that change the design and copy of the opt-in screen. Then you might try a really large change, like showing a video to users explaining why your push notifications are so amazing. Trying both small and large experiments allows you to cover more ground.
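
To make that concrete, here’s a minimal sketch of how a test like this might bucket users into a mix of small and large variants. The experiment and variant names are hypothetical, and deterministic hashing is just one common way to do assignment:

```python
import hashlib

# Hypothetical variants for a push notification opt-in test: two small
# copy tweaks plus one large change (an explainer video).
VARIANTS = ["control", "copy_v1", "copy_v2", "explainer_video"]

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user ID together with the experiment name keeps a
    user's assignment stable within this test but independent of
    their assignment in other tests.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-42", "push_optin_screen"))
```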

If you have run many experiments of varying sizes that test very different experiences and none of them are showing positive results, you might already be close to the optimal user experience. Even if you aren’t, you might be close enough that the additional effort and time required to improve the metric are no longer worth the cost.
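
One way to gut-check this is a rough sample-size calculation: the traffic you need to detect a lift grows with the square of how small that lift is. Here’s a back-of-the-envelope sketch, assuming a proportion metric, a two-sided 5% significance level, 80% power, and a hypothetical 40% baseline opt-in rate:

```python
# Rough per-arm sample size for detecting a lift in a proportion
# metric. The z-values are hardcoded for alpha = 0.05 (two-sided)
# and 80% power; the pooled-variance formula is an approximation.
Z_ALPHA = 1.96
Z_BETA = 0.84

def users_per_arm(baseline: float, lift: float) -> int:
    p_bar = baseline + lift / 2          # average rate across both arms
    variance = 2 * p_bar * (1 - p_bar)   # pooled variance approximation
    return round(variance * ((Z_ALPHA + Z_BETA) / lift) ** 2)

for lift in (0.05, 0.02, 0.01, 0.005):
    print(f"+{lift:.1%} lift -> ~{users_per_arm(0.40, lift):,} users per arm")
```

Halving the detectable lift roughly quadruples the required traffic, which is why squeezing out the last point of improvement can cost more than it returns.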

Think of experiment sizes not by the complexity of building them, but by the relative change to the user experience. For instance, a copy change in an email is a small experiment. However, removing every onboarding email is a large experiment, even if creating and shipping the test was low effort.

Sign #2 — You’re running out of compelling ideas

Sometimes, after a series of experiments, the team just loses steam in a certain area. Shipping many A/B tests without seeing a win is demoralizing. If the team can’t brainstorm compelling new ideas, it might be time to move on. This doesn’t mean that there isn’t more juice in the proverbial lemon. But a team with high morale and belief in an area is going to work faster, with more creativity and collaboration. It might be wise to move on to a different project area and then come back after some time away. After all, absence makes the heart grow fonder!

Sign #3 — You’re not learning

Last, but perhaps most important, you should be looking at your rate of learning to determine whether it’s time to move on. A/B testing should always be in service of learning more about your users and the experiences that help them find value in your product. Every A/B test should produce some learning that can be applied in other situations.

There are two reasons you might not be learning from your tests anymore. The first is simply that your experiments are coming back with no change to metrics. That isn’t to say there are no learnings when you run a neutral test (the absence of change can be a learning in itself). But if you keep running tests that show no effect in either direction, you probably aren’t making drastic enough changes to the user experience. You’re testing changes that aren’t important enough to help you learn.
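
A neutral result only counts as a learning if the test could plausibly have detected a meaningful change. Here’s a quick sketch of a two-sided, two-proportion z-test with hypothetical numbers; the same observed rates are indistinguishable from noise at a small sample size but a clear signal at a large one:

```python
import math

def two_sided_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - normal CDF of |z|)

# 40.0% vs. 41.2% conversion, at two very different sample sizes.
print(two_sided_pvalue(400, 1_000, 412, 1_000))            # ~0.59: looks "neutral"
print(two_sided_pvalue(40_000, 100_000, 41_200, 100_000))  # far below 0.05
```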

Secondly, you might be running tests without clear hypotheses and goals. There are two ways this can show up. The first is running tests where too many variables are changing, so it’s difficult to identify which changes actually matter. Experiments need a precise hypothesis, with changes to the user experience focused on proving or disproving that hypothesis. For example, if you’re trying to increase notification opt-in rate and you change the copy on the opt-in screen, but you also shorten the length of onboarding, you aren’t going to get a clear signal as to which change actually mattered.
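
If you genuinely need to test two changes at once, one option is to randomize each factor independently (a simple 2×2 factorial design) so their effects can be separated afterwards. A minimal sketch, with hypothetical factor names, that salts a hash with the factor name so assignments are independent:

```python
import hashlib

def in_treatment(user_id: str, factor: str) -> bool:
    """50/50 assignment for one factor. Salting the hash with the
    factor name makes assignments independent across factors, so
    all four combinations of arms receive traffic."""
    digest = hashlib.sha256(f"{factor}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 2 == 1

# Each user lands in one of four cells, so the copy effect and the
# onboarding effect can be estimated separately rather than confounded.
user = "user-42"
new_optin_copy = in_treatment(user, "optin_copy")
short_onboarding = in_treatment(user, "onboarding_length")
print(new_optin_copy, short_onboarding)
```

The cleaner default, though, is still one hypothesis and one change per test.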

Additionally, you might simply be running experiments for the sake of it. There isn’t some magic pot of gold you win after you’ve run 100 experiments. The best experiments produce the most novel and widely applicable learnings, so you should structure your experiments so that the outcome can influence broader changes in your product. For example, at Strava we try to test copy variations that can be applied to many different user experiences. So a learning from a push notification copy test might be ported over to somewhere else in the app.

Conclusion

Sometimes resilience is important, but sometimes resilience is actually just stubbornness. We hope these three signs will help you make better decisions around where you and your team spend your time. Don’t get stuck in the optimization rut!


Jason van der Merwe

Director of Growth Engineering @ Strava, born in South Africa, runner/cyclist depending on the year, global soccer fan.