How to Get Better at A/B Testing and Optimization

Recently I joined the Optimization Podcast by Sentient, hosted by Paul Jarratt.

I’ve transcribed some of my favorite answers below. Thanks again to the Sentient team for featuring me!

PJ: So tell me a little bit about your testing experience at Showmax. What do you test, and how do you go about testing? What are the reasons you test?

BE: We’ve really focused on building out a culture of A/B testing. We think the best way to validate ideas, from both the product and technology perspectives, is to get things in front of customers quickly.

That gives us a sense of whether customers are using a new idea or a new feature, and the data tells us whether a customer is actually enjoying the experience. We combine that with direct customer research (talking to customers and gathering insights), but the best way to get an actual read is to put features in front of users and see what they do with them.

What we’ve really focused on at Showmax is building a platform that makes it easy to put a lot of tests into market and get quick reads. We’ve built our own internal A/B testing platform, and we have our own landing page system that lets us test a lot of different landing page combinations.
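The interview doesn’t get into the mechanics of that platform, but as a minimal sketch of the usual building block for this kind of system (an assumption about typical practice, not a description of Showmax’s actual implementation): hash-based bucketing assigns each user to a variant deterministically, so the same user sees the same experience across sessions.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically bucket a user: same user + experiment -> same variant."""
    # Hash the experiment/user pair so buckets are stable across sessions
    # and independent across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: a 50/50 split for a hypothetical landing page test.
print(assign_variant("user-42", "landing-hero-v2", ["control", "treatment"]))
```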

We’re looking for a variety of things. Are new features working? Can we test those? We’re also looking at how we can optimize our conversion and where our other opportunities are.

PJ: I know you’ve got a long history of testing across lots of different companies. What are some of the best practices you’ve seen to date, or that you would like optimization teams to follow? What have you learned along the way that seems to ring true and work well?

BE: One of the big ones for me is getting to a place where you have a lot of testing velocity. If you can only afford one or two tests a month, I’d argue that the best thing you could do is not to run those one or two tests, but instead to spend that time increasing your velocity so you can double or triple the number of tests you run on a regular basis. Putting the investment into tools and infrastructure can be really critical, because your chances of success with a small number of tests are really low.
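To make the velocity argument concrete, here’s a rough back-of-envelope sketch. The 15% win rate is a hypothetical placeholder, not a Showmax number; plug in your own team’s historical rate.

```python
# Back-of-envelope: expected winning tests per year as a function of
# velocity, assuming a hypothetical 15% win rate per test.
def expected_wins(tests_per_month: int, win_rate: float = 0.15, months: int = 12) -> float:
    return tests_per_month * months * win_rate

for velocity in (1, 2, 4, 6):
    print(f"{velocity} tests/month -> ~{expected_wins(velocity):.1f} wins/year")
# Prints roughly 1.8, 3.6, 7.2, 10.8; doubling velocity doubles expected wins.
```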

Another key is pushing your teams to have a hypothesis behind why they’re running each test. It’s maybe not as important for things like testing button colors. But if the team can’t come up with a customer-based reason that a test makes sense, that always gives me pause, because it makes me question why we’re running the test at all. Why are we wasting the resources? Could we not better allocate our product and engineering talent toward things we have a good sense will matter for customers?

That’s not me arguing that every test deserves lengthy design and user research. It’s more that I want the team to be able to think from a customer or business perspective about why a test would matter, rather than coming up with a variety of random ideas they want to test just because they can. I feel like that leads to better outcomes.

The last one is: what is the metric you’re trying to move, and what are the downstream metrics that should move as a follow-on effect? A lot of times you move the very top-of-funnel metric and get more people clicking through step one or giving you an email address.

But then when you actually look at the total impact of the test, it’s negligible on your core business metric, such as revenue or subscribers. I think one of the big mistakes that a lot of people make is to forget to measure the downstream impact of their test.

Hence the importance of full-funnel metrics. Some teams only look at the top-line metrics and declare tests a success. Then six months later everyone’s wondering why the business results didn’t improve despite all these “successful” tests. It’s because people didn’t properly look at what the full effect of the tests was.
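As an illustration of that full-funnel check (with made-up numbers, not results from any test discussed here), a variant that wins big at step one can be nearly flat on the metric that actually matters:

```python
# Hypothetical funnel counts for control vs. variant; the point is to
# compute lift at every stage, not only the first.
control = {"visitors": 10_000, "step1_clicks": 1_200, "subscribers": 240}
variant = {"visitors": 10_000, "step1_clicks": 1_500, "subscribers": 244}

def lift(stage: str) -> float:
    """Relative lift of variant over control at a given funnel stage."""
    control_rate = control[stage] / control["visitors"]
    variant_rate = variant[stage] / variant["visitors"]
    return variant_rate / control_rate - 1

for stage in ("step1_clicks", "subscribers"):
    print(f"{stage}: {lift(stage):+.1%}")
# step1_clicks shows +25.0% (an apparent big win), while subscribers
# moves less than +2%: negligible on the core business metric.
```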

PJ: Have you had any surprises where you run a test and you think wow, I really didn’t expect that result? What are some takeaways from those experiences that you could share with others?

BE: I used to assume that a well-curated list of options would be something customers would prefer if it felt handpicked for them. One of the big learnings I’ve had is that in many cases, handpicking the first two or three things in a list or in an email is great, but beyond that, the more options the better. A good example was One Kings Lane. The example is pretty simple, but interesting. We started by showing twelve sales in our emails. In an early test, we added four more for a total of sixteen, and we saw our click-through rates and our conversion rates increase.

We went to twenty-four and saw our click-through and conversion rates increase again. We ended up moving all the way up to thirty-six before conversion maxed out through that method. I think the long tail of options means you have a better chance of capturing your customer. With a shorter list, if we didn’t pick really well within the first three or four options we showed, the customer wasn’t going to click.
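A toy model makes that long-tail intuition explicit (my illustration, not One Kings Lane data): if each item shown has some small, independent chance of matching a given customer’s taste, the odds that at least one item lands grow quickly with the number shown.

```python
# Toy long-tail model: chance that at least one of n independent items
# matches a customer, assuming a hypothetical 5% per-item match rate.
p = 0.05

for n in (12, 16, 24, 36):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n} items -> {at_least_one:.0%} chance of at least one match")
# 12 -> 46%, 16 -> 56%, 24 -> 71%, 36 -> 84%: more options keep raising
# the odds, with diminishing returns past a point.
```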

I’ve had similar learnings with content on Showmax. We’ve been able to consistently add more content to our homepages, and the more content we add, the more people click on it. It’s not rocket science. But I think it’s human instinct to believe we can show less, well-curated content and people will click on it. The reality is that people’s tastes are so different that it’s hard for that to work when you can solve the problem with personalization and more choice.

Another learning: don’t be afraid to test a design just because you don’t love its aesthetic.

Don’t be afraid to try different designs, even ones that are less “pretty.” I tend to be very conversion-focused, and a lot of designs I thought looked good ended up losing for a variety of reasons. I think a lot of it comes down to the overall aesthetic of the website and who the users accessing it are.

Then you need to think about what types of conversion funnels, buttons, CTAs, and experiences are going to work. I used to be a big proponent of “the fewer steps the better,” but I’ve found in different experiences that sometimes more steps, each with less information, where you feel more locked in at each step, actually converted better. But then you try the same thing at a different company and it doesn’t work.

So I think the real takeaway is that there’s no single formula. In every business you have to test and iterate your way to success. The key is having a mindset that enables that and being willing to test your way to the best thing for the company you’re working on at the time you’re working there.
