BBC iPlayer has moved — did you notice?

The story of moving from a monolithic PHP app to multiple Node.js apps

Andy Smith
BBC Product & Technology
Oct 5, 2018 · 7 min read

--

Over the past few years, we’ve been hard at work moving the iPlayer website to a more stable and scalable architecture, at the same time as adding new features and improving the user experience. Read on to find out more about how we have rebuilt our entire stack, changed programming language and frameworks, and added features, all at the same time.

Where the journey began

Around three years ago, the iPlayer website codebase was a large monolith containing PHP, JavaScript and Spectrum, the BBC’s own view layer. This is how it looked to users:

iPlayer homepage from September 2015. Source: Wayback Machine

Why we had to do something

The site worked perfectly well and was very rarely unavailable — so why did we go on this journey? There were three main reasons — let’s take a look at each.

First off — the codebase was huge and extremely difficult for new joiners to the development team to get their heads around. We needed a way to split the code into sensible chunks, so that we could focus on one area of the site at a time and develop new features faster.

Secondly — the site was hosted on our internal hosting platform on physical BBC servers. This hosting was custom built to efficiently serve largely static pages, with only minimal variation. It meant we were limited to serving the same cached pages to the entire audience, which used to be fine, but we were starting to move towards delivering a highly personalised experience for our users.

Thirdly — in order to release our code changes to the audience, we had to do a weekly release. This involved one of our development team packaging up the code and requesting that our colleagues in the frameworks team deploy it through the environments on given dates. If we had a code change ready but missed the weekly cut-off, we had to wait another week before we could release the feature to the audience! We really wanted to get to a place where we could ship user value as soon as possible.

Step 1 — de-monolith and quicker deployments

Our first step on this journey was to work out how to release our code changes using a continuous delivery approach — that is, being able to click a button and deploy our own code changes to the live environment as soon as we were ready, without depending on any other teams or a weekly release process.

At this stage, we didn’t want to completely rebuild our PHP application, so we needed to create some cloud-hosted microservices that our PHP application could use to render certain parts of the website.

Now that we were free to move away from our internal hosting, which only supported PHP and Java, we were able to choose the best programming language for the job. Conveniently, the iPlayer API team, iBL, were on a similar journey and had just done a comparison of different programming languages. They built a simple version of their API in four different languages — Ruby, Java, Scala and Node — and, based on readability, testability, flexibility and general ease of use, they chose Node. We went with their choice, with the added benefit that Node is of course JavaScript, meaning our full-stack engineers no longer had to switch programming language context when moving between frontend and backend development.

We spent some time developing microservices that returned bits of HTML for certain parts of the iPlayer website for our main PHP application to use. This felt like a massive improvement to the team — we were suddenly able to release new designs and new features far quicker, like this iPlayer homepage from early 2016:

iPlayer homepage from February 2016. Source: Wayback Machine
Diagram of our initial microservices architecture
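
To make that concrete, here is a minimal sketch of what one of these fragment-serving microservices might have looked like, using only Node’s built-in http module. The route, markup and hardcoded data are all illustrative assumptions (the article does not describe the real endpoints); the point is the shape: a small Node service returning a chunk of HTML for the PHP monolith to embed.

    // Hypothetical fragment microservice, using only Node's built-in http
    // module. It returns a chunk of HTML rather than a full page; the PHP
    // monolith embeds that chunk into the page it renders.
    const http = require('http');

    const server = http.createServer((req, res) => {
      if (req.url === '/fragments/most-popular') {
        // Illustrative hardcoded data; in reality this would come from
        // an API such as iBL.
        const programmes = ['Doctor Who', 'EastEnders', 'Blue Planet'];
        const items = programmes.map(p => `<li>${p}</li>`).join('');
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(`<ul class="most-popular">${items}</ul>`);
      } else {
        res.writeHead(404);
        res.end('Not found');
      }
    });

    server.listen(8080);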

Whilst this was great, we still had an issue: we weren’t able to serve any personalised content to our users. Even though we had these microservices, every response still went through our PHP application, hosted on limited-capacity internal servers.

Step 2 — de-monolith further and allow personalisation

To be able to serve personalised content to our users, we needed to bypass the internal hosting for those bits of content. This meant serving the static parts of the page through our PHP application with placeholders for personalised content, and serving some static JavaScript so that users’ browsers could load the personalised bits directly from our cloud-hosted microservices into those placeholders.

Our first personalised page was the Watching page. The PHP application returned the static HTML (the header, navigation, footer and so on) and a loading spinner. The JavaScript then ran in the user’s browser, calling a new microservice. This microservice called the BBC’s User Activity Service (via iBL) to get a list of programmes the user had been watching, and returned the markup to display them. This new architecture meant we could now serve personalised pages, like this Watching page from the end of 2016:

iPlayer Watching page from 2016
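
Conceptually, the browser-side half of this flow is small. The sketch below is a hypothetical reconstruction rather than the production code: the placeholder attribute and endpoint URL are invented, but it shows the pattern of swapping the loading spinner for personalised markup fetched from a microservice.

    // Hypothetical client-side loader. The PHP page ships a placeholder
    // (containing a spinner) plus this script; the browser then fills the
    // placeholder with personalised HTML from a microservice.
    document.addEventListener('DOMContentLoaded', function () {
      var placeholder = document.querySelector('[data-personalised="watching"]');
      if (!placeholder) return;

      var xhr = new XMLHttpRequest();
      // Invented endpoint; in reality this pointed at the cloud-hosted
      // microservice, with the user's cookies identifying who is asking.
      xhr.open('GET', '/watching/fragment');
      xhr.onload = function () {
        if (xhr.status === 200) {
          placeholder.innerHTML = xhr.responseText; // replaces the spinner
        }
      };
      xhr.send();
    });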

So by this point, the journey was going great — we had fast delivery of new features and personalised pages — but we still had some issues:

  • These personalised pages required JavaScript to run, so they would not work on devices without JavaScript or with it turned off
  • We had a mix of architectures — a PHP monolith requiring a weekly release, and Node microservices we could deploy ourselves. Ideally we’d have just one way of doing things
  • The internal hosting platform was being decommissioned so we needed to move the entire site into the cloud

We now needed to take what we’d learnt from our Node microservices and migrate the rest of the iPlayer website.

Diagram of our second iteration of our microservices architecture

Step 3 — Micro-monoliths

We took our learnings from our Node microservices and started writing Node apps that could return full web pages, rather than chunks of HTML. We’re calling these micro-monoliths — we have individual apps for individual sections of the website, but each app has all the logic needed to construct an entire page (or multiple pages).
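
As a rough illustration of the difference from the fragment approach, a micro-monolith owns the whole response. The sketch below assumes Express and invents the route, data helpers and markup, since the article does not name the framework or endpoints; what matters is that the app renders a complete HTML document server-side.

    // Hypothetical micro-monolith: one Node app serving complete,
    // server-side rendered pages for one section of the site.
    const express = require('express');
    const app = express();

    // Stubbed data access; in production this would call the iBL API.
    async function fetchHomepageRails() {
      return [{ title: 'Most Popular', programmes: ['Doctor Who', 'EastEnders'] }];
    }

    function renderRail(rail) {
      const items = rail.programmes.map(p => `<li>${p}</li>`).join('');
      return `<section><h2>${rail.title}</h2><ul>${items}</ul></section>`;
    }

    app.get('/iplayer', async (req, res) => {
      const rails = await fetchHomepageRails();
      // A full HTML document, not a fragment for someone else to embed.
      res.send(`<!DOCTYPE html><html><head><title>BBC iPlayer</title></head>
        <body>${rails.map(renderRail).join('')}</body></html>`);
    });

    app.listen(8080);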

We began this process in early 2017, starting with a lesser-used part of the site as a proof of concept. A few engineers were tasked with it, so that the rest of the team could continue building new features for the rest of the site.

We launched our first app in early-mid 2017 — our first pages fully rendered server-side, in the cloud, using Node! We then swiftly moved on to the part of the site with the most impact and, more importantly, the area we are most likely to change going forward — the homepage. We did this so that we could find any limitations of our new architecture and solve the hardest problems first, making the rest of the site migration relatively simple.

Diagram of our new micro-monolith architecture

Step 4 — Bye bye PHP!

Since last year, we’ve been migrating the rest of the iPlayer website across to our new Node architecture, whilst at the same time adding features and improving the user experience. As we migrated each of the pages and created more apps, we used an A/B test to check that the new architecture and updated designs did not have an adverse effect on users. You can read more about how we’ve been A/B testing in my blog post — “Optimising the iPlayer Experience with A/B testing”.
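
The mechanics of a migration test like this can be as simple as bucketing each user once and remembering the choice. The sketch below is a generic illustration and not the BBC’s actual experimentation setup; the cookie name, split ratio and handler names are all invented.

    // Generic sketch of migration A/B bucketing: assign each user to the
    // old (control) or new (variant) implementation once, via a cookie,
    // so they get a consistent experience across visits.
    function getBucket(req, res) {
      const match = (req.headers.cookie || '').match(/ab_bucket=(control|variant)/);
      if (match) return match[1];

      const bucket = Math.random() < 0.5 ? 'control' : 'variant';
      res.setHeader('Set-Cookie', `ab_bucket=${bucket}; Path=/; Max-Age=2592000`);
      return bucket;
    }

    // Usage inside a request handler (hypothetical helper names):
    //   getBucket(req, res) === 'variant'
    //     ? renderNewNodePage(req, res)   // new Node-rendered page
    //     : proxyToLegacyPhp(req, res);   // existing PHP page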

The journey is coming to an end

It’s been a long journey but we’re finally coming to the end, as this quarter we’re migrating the last few pages to our new Node architecture. We’ve come a long way in the past few years — we’re now able to serve fully server-side rendered personalised pages from our scalable architecture. Here are a few highlights and stats around the benefits of this journey:

  • Our engineers are far happier developing everything in JavaScript, with one standard way of doing things and no context switching
  • We can respond to bugs far quicker, often releasing a fix within a few hours
  • We’ve gone from 1 release per week to releasing changes as and when they are ready, currently averaging 20 releases per week
  • We’ve gone from rarely running an A/B experiment to running experiments to validate every change we make

Join us

If you like what we are doing with the tech stack and want to make an impact, then do get in touch as we’re always hiring. You can see all the job listings for BBC iPlayer on the BBC Careers website.

--

Andy Smith
BBC Product & Technology

Software Engineering Manager/Principal at Nuffield Health. Previously Lead Engineer at Pret and Software Engineering Team Lead at BBC.