Optimising the iPlayer Experience with A/B testing

Andy Smith
BBC Product & Technology
6 min read · Jun 1, 2018

We always build iPlayer for the audience! We aim to optimise iPlayer to make it as easy as possible for our users to find what they want to watch and to help them discover new content. The BBC’s motto is “inform, educate and entertain” after all!

To ensure everything we’re doing is helping us towards this aim, we’ve had a culture change over the past year or so. We used to spend a while perfecting a new feature in the hope that it would be beneficial. We now release every new feature or change as an A/B or A/B/n test, and fine-tune it afterwards if it has a positive impact. In this blog post, I’ll talk about how we’re now releasing a new optimisation experiment nearly every week and the code that goes into it. I’ll also go on to talk about some of the experiments we’ve conducted!

Homepage A/B Test — the old variant contains episode items and tall groups, whereas the new variant on the right has episode items in carousels, allowing users to discover more content without navigating to a separate page.

Rapid experimentation

The BBC has recently started using a new tool for running experiments across the BBC’s websites and apps. This tool has an area for us to share our ideas for experiments and allows us all to vote on them. We vote based on various criteria such as potential for impact and level of effort required to set up. We all put our ideas into the tool as and when we come up with them, and then each week we vote on any new ideas that have been added. This voting technique makes it easy for us to then work out which experiment to do next, normally the one with the highest score that’s still relatively quick and easy to do.

Each week, our product team (including developers, testers, designers, product owners and project managers) will meet up for around half an hour to an hour to flesh out exactly what work needs to be done.

The following week, one pair of developers will pick up this work and aim to get it released within a day or two!

To use a cliché, it’s as simple as 1, 2, 3:

  1. Generate ideas (on an ad-hoc basis)
  2. Vote on ideas (at least once a week)
  3. Pick the top easy idea and do it!

The crucial thing here is that for our weekly experiments, we’re doing the ideas that are easy and cheap to develop. That doesn’t mean we forget about the harder ideas — these need a little more work and so have a longer lead time. That could mean doing some research into the technical approach or UX, or talking to other teams in the BBC that we depend on for data. These so-called harder ideas are then prioritised and put on our product roadmap.

How do we do it?

The tool allows us to configure the following for our experiments:

  • ID
  • Description
  • Percentage of the traffic to allocate into the experiment
  • An option to make an experiment mutually exclusive with one or more others
  • IDs and descriptions of variations, and a percentage of audience to allocate to each
  • Optional audience filtering, for example based on the user’s screen size
  • Metrics to track
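
To make that concrete, here is a rough sketch of what a configuration for our homepage carousels experiment could look like. The field names below are purely illustrative; the real tool has its own UI and schema:

// Hypothetical sketch of an experiment configuration; treat every field name
// and value here as illustrative only.
const experiment = {
  id: 'iplayer-homepage-carousels',
  description: 'Show homepage sections as carousels instead of tall groups',
  trafficAllocation: 0.1, // 10% of users enter the experiment
  mutuallyExclusiveWith: ['iplayer-homepage-dedupe'],
  variations: [
    { id: 'control', description: 'Existing homepage layout', allocation: 0.5 },
    { id: 'carousels', description: 'Sections shown as carousels', allocation: 0.5 }
  ],
  audience: { minScreenWidth: 600 }, // optional filtering, e.g. on screen size
  metrics: ['play-click']
};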

Once the experiment has been set up in the tool, we can then write some code!

The tool comes with SDKs for various programming languages, including NodeJS, which we use in iPlayer. To activate a user, we simply make a call to the SDK’s .activate() function, passing the identifier for the experiment and a unique ID for the user. This function works out which variant the user should get shown, and returns its identifier. Our code then has simple conditional logic to work out what variant to show. For example:

const variant = abTool.activate(experimentId, userId);

if (variant === 'carousels') {
  renderCarouselsHomepage();
} else {
  renderDefaultHomepage();
}

If we set up audience filtering for this experiment, we can also pass user attributes to this function, such as the user’s device size, so that only the right audience members are activated.
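
The exact signature depends on the SDK you’re using, but as a sketch (the attribute names below are made up for illustration), it might look something like this:

// Sketch only: attributes describing the user are passed alongside the
// experiment and user IDs; the attribute names here are hypothetical.
const userAttributes = {
  screenWidth: 1280, // e.g. read from a cookie or client hint
  device: 'desktop'
};

const variant = abTool.activate(experimentId, userId, userAttributes);

if (!variant) {
  // We assume the SDK returns no variant when the user doesn't match the
  // audience filter, so we fall back to the default experience.
  renderDefaultHomepage();
}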

Now that the user is activated, we need to send events when they perform certain behaviours, such as clicking the play button. This is so that we can see whether the variant they’re seeing is having a positive or negative impact on our metrics.

We use the tool’s SDK for this too, with a simple call to the .track() method, passing the name of the metric, the unique ID for the user and, where applicable, the user’s attributes again. For example:

function sendPlayEvent() {
  abTool.track('play-click', userId, userAttributes);
}

playButton.addEventListener('click', sendPlayEvent);

At this point, we’re now ready to confirm everything has been set up correctly and then turn on the experiment!

It’s really that easy — drop in a call to .activate() and one or more calls to .track() and we’re ready to start seeing how our changes affect our metrics!

Recent Experiments

We’ve tested a wide variety of things in the past year, including:

  • Changing the Most Popular page to an image-focussed grid of programmes, rather than a text-heavy list
  • Removing duplication of episodes between sections on the iPlayer homepage
  • Changing the pages that show all the episodes for a programme to an image-focussed grid of episodes, rather than a text-heavy list
  • Changing the Search results page to an image-focussed grid of programmes, rather than a text-heavy list
  • Adding performance improvements, including reordering HTML elements in the <head> tag to change the order in which assets (such as styles and scripts) get loaded, and adding a Link header to suggest to browsers to preload some assets (see the sketch after this list)
  • Switching on the BBC’s new corporate font — BBC Reith
  • Reducing the height of the header on the Most Popular page so that more items are visible in the viewport
  • Showing content in sections on the iPlayer homepage in a slider/carousel, rather than a grid view, meaning we surface more programme options to users without them having to navigate to another page
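
As an illustration of the Link header idea mentioned above, here is a minimal sketch (not our production setup, and with a made-up asset path):

// Minimal sketch: a Node server sending a Link header so the browser can start
// fetching the stylesheet before it has finished parsing the HTML.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Link', '</assets/iplayer.css>; rel=preload; as=style');
  res.setHeader('Content-Type', 'text/html');
  res.end('<!DOCTYPE html><html><head>...</head><body>...</body></html>');
}).listen(3000);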

As you can see, the ideas range from minor design tweaks to larger design changes, and from new features to technical changes. As long as the idea has the possibility of having an impact, it’s a valid idea.

At the time of writing, some of these experiments are still running, but none of those that have concluded has had a negative impact on our metrics!

Learnings and Key Points to Take Away

If you’re thinking of moving towards a culture of rapid experimentation, here are a few of our learnings and key points to take away:

  • The whole team should have an opportunity to share and vote on ideas
  • If you’re aiming for an experiment every week, ideas should be quick, easy and cheap to develop
  • We’re lucky to have a lot of traffic to iPlayer. This means that we don’t have to run experiments for long to reach statistical significance, even if the experiment has only gone out to a small percentage of the audience (see the sketch after this list). The great thing about this is that the ideas you’re experimenting with don’t have to be perfect, and shouldn’t be
  • The code you’re writing for the experiment doesn’t have to be perfect either — it’ll only be there for a short period of time whilst the experiment is running
  • If an experiment shows a negative impact, don’t roll the feature out and don’t get disheartened. You could tweak the idea and experiment with it again. If not, remove the code! Hopefully, you’ve got plenty more ideas to experiment with
  • Once an experiment has had a positive impact, this is the time to perfect it. This includes tidying up both the visual design and the code behind it, and doing any other technical work required to make it a production-ready, sustainable feature
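
For a feel for why traffic volume matters, here is a rough sketch of the standard two-proportion z-test behind that kind of significance check. Our experimentation tool does this analysis for us, and the numbers below are made up:

// Rough two-proportion z-test: is the difference in play-click rate between
// control and variant unlikely to be down to chance? |z| > 1.96 is roughly
// significant at the 5% level.
function zTest(conversionsA, usersA, conversionsB, usersB) {
  const pA = conversionsA / usersA;
  const pB = conversionsB / usersB;
  const pooled = (conversionsA + conversionsB) / (usersA + usersB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / usersA + 1 / usersB));
  return (pB - pA) / standardError;
}

// Made-up example: 4,800 plays from 100,000 control users vs 5,100 plays
// from 100,000 variant users gives z ≈ 3.09, comfortably significant.
console.log(zTest(4800, 100000, 5100, 100000).toFixed(2));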

We’ve loved running loads of experiments to improve iPlayer for our users — hopefully this blog post has shown you how easy it is for you to do the same for your users too!
