Powering BBC Online with nanoservices
A pseudo-serverless approach to making flexible APIs and websites
The BBC has a wide range of websites and apps, ranging from the huge, global BBC News to the specialist, such as Bitesize. Creating this broad variety of experiences, in an affordable and reliable way, is a big technical challenge. To achieve this, one approach we’ve taken is nanoservices: hundreds of small components, developed and owned by different teams, which come together in different ways to make different experiences. It’s a form of serverless computing, and it’s produced some interesting results.
What is a nanoservice?
A nanoservice is an independent component that does one job well.
Practically, a nanoservice is like a library. It has a name, code, documentation, tests, and dependencies. It is independently developed, tested, released, and versioned. And it often will use multiple other nanoservices to offer rich functionality.
Our nanoservices always have one ‘entry point’ — one way in which they are invoked, which is parameterised. This is often then exposed as an API endpoint, such as /get-weather?location=London.
Internally we refer to nanoservices as templates because they are most commonly used to ‘template’ data. That is, they aggregate, filter, and translate data from one format to another. They may take data from an API and convert it into an HTML component, for example. Or they may analyse multiple data sources to determine what is relevant to show the user.
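As a sketch of the idea (all names and data shapes here are invented for illustration, not the BBC's actual code), a weather nanoservice might be a single parameterised function that templates data from an upstream source into an HTML component:

```javascript
// A hypothetical nanoservice: one entry point, parameterised, doing one job.
// It takes data from an upstream source and templates it into HTML.
function getWeatherCard(params, fetchForecast) {
  // fetchForecast stands in for a data-layer nanoservice or API client.
  const forecast = fetchForecast(params.location);
  // Translate the raw data into an HTML component.
  return `<div class="weather-card">` +
         `<h2>${forecast.location}</h2>` +
         `<p>${forecast.summary}, ${forecast.tempC}°C</p>` +
         `</div>`;
}

// Example invocation, as a platform might do for
// GET /get-weather?location=London
const weatherHtml = getWeatherCard(
  { location: 'London' },
  (location) => ({ location, summary: 'Cloudy', tempC: 14 })
);
```

The single entry point means the platform can expose the whole nanoservice as one API endpoint, with its parameters mapped from the query string.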
How do nanoservices compare with microservices?
Nanoservices are typically smaller than microservices (though probably not by 1,000 times!). They can both come in many sizes, so there is no hard rule on how much smaller a nanoservice should be. But a nanoservice typically does one thing, whereas a microservice may often do a few things.
For example, a microservice that offers a RESTful API would often have multiple commands (endpoints). Whereas with nanoservices, each command would likely be separate. This allows more fine-grained control on how they are developed, released, and reused.
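To make the contrast concrete (routes and commands invented for illustration), a microservice bundles several commands behind one deployable unit, whereas in the nanoservice model each command stands alone:

```javascript
// Microservice style: one deployable unit owning several commands
// (endpoints), released and versioned together.
const articleMicroservice = {
  '/articles/list': (params) => ['a1', 'a2'],
  '/articles/get': (params) => ({ id: params.id, title: 'Hello' }),
  '/articles/search': (params) => ['a2'],
};

// Nanoservice style: each command is its own unit, independently
// developed, released, versioned, and reused.
const listArticles = (params) => ['a1', 'a2'];
const getArticle = (params) => ({ id: params.id, title: 'Hello' });
const searchArticles = (params) => ['a2'];
```

A team that only needs `getArticle` can depend on it alone, at a pinned version, without taking on the rest of the surface area.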
It is generally considered an anti-pattern for microservices to become too small, as their overhead outweighs the benefit. But as we shall see below, nanoservices have a different approach to deployment, giving them a far smaller overhead. The benefits of smaller, more flexible units now outweigh the costs.
Nanoservices in action
The BBC has over 1,000 nanoservices in production. Together, they create a range of dynamic web pages, and also APIs for mobile and TV apps.
Let’s take the BBC News homepage as an example. About 30 nanoservices help create components on this page. Some are responsible for gathering the data (headlines, etc.). Others are responsible for using that data to create HTML. Common behaviour between nanoservices (e.g. to handle CSS) is placed in shared libraries.
Using multiple small nanoservices to create webpages like this has multiple advantages:
- Cross-team development: Different teams can easily work on different components at the same time. They can update and release independently, to a timetable that works best for them.
- Safe: Releases are safer, as they are smaller, and so the impact of a breaking change is less.
- Sharing: Sharing becomes easier. Each nanoservice becomes a building block from which different things are more easily made. Following the principles of the Unix Philosophy, each nanoservice does one thing well, making it more modular than a microservice typically is. Like a Unix command such as ‘sed’ or ‘grep’, this allows it to be composed with others whilst still behaving like a service that is easy to understand and use.
- Flexibility: We use nanoservices to create both the HTML components and the data layer that powers them. This flexibility allows rapid iteration as the behaviour evolves. The same team can update both the web component and the backend at the same time, connected by versioning.
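A toy composition (names and data entirely invented) shows how a data-layer nanoservice and a rendering nanoservice might come together to build one part of a page:

```javascript
// Data-layer nanoservice: gathers and filters the headlines.
const getTopHeadlines = (params) =>
  ['Headline one', 'Headline two', 'Headline three'].slice(0, params.count);

// Rendering nanoservice: turns that data into an HTML component.
const renderHeadlineList = (params) =>
  '<ul>' + params.headlines.map((h) => `<li>${h}</li>`).join('') + '</ul>';

// Composition: one nanoservice's output feeds another's input. Different
// teams could own each piece and release them independently.
const headlinesHtml = renderHeadlineList({
  headlines: getTopHeadlines({ count: 2 }),
});
```

Because the two halves are connected only by the data they exchange, the team that owns the data layer and the team that owns the component can evolve each side on their own timetable.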
The BBC’s nanoservices are written in Node.js, using React to render HTML. But we believe the concept would work well for almost any language, and for more use cases than making web pages.
How it works: a pseudo-serverless approach
Because tens or even hundreds of nanoservices must work together to create something, such as a web page, they need to be extremely efficient. They must have:
- A very low start-up time: ideally they need to start within ten milliseconds of being requested.
- Very low latency between them: every millisecond counts when many nanoservices are involved.
These properties are typically unachievable using traditional microservice architecture. The communication latency is usually too great, especially if split across servers, or if done via HTTP. This is why, for nanoservices, we’ve taken more of a serverless approach.
We’ve created a simple internal platform that allows nanoservices to be uploaded, as code. It then allows a nanoservice to be executed on demand, via a RESTful API. There are no containers or servers dedicated to each nanoservice; instead, they all run on a shared platform, which for us is a set of auto-scaling cloud instances.
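In miniature (this is a deliberately simplified sketch, not how Morph itself works), the platform amounts to a registry that nanoservices are deployed into as code, plus an execute call that runs them on demand:

```javascript
// A minimal sketch of a shared nanoservice platform. There is no
// dedicated server per nanoservice: code is uploaded into a registry,
// then executed on demand on shared infrastructure.
const registry = new Map();

function deploy(name, fn) {
  // "Uploading" a nanoservice as code.
  registry.set(name, fn);
}

function execute(name, params) {
  const fn = registry.get(name);
  if (!fn) throw new Error(`Unknown nanoservice: ${name}`);
  // Runs on the shared platform, not on its own instance.
  return fn(params);
}

deploy('get-weather', (params) => ({ location: params.location, tempC: 14 }));

// The platform would expose this as e.g. GET /get-weather?location=London
const weatherResult = execute('get-weather', { location: 'London' });
```

Scaling is then a property of the shared platform as a whole, rather than something each nanoservice team has to configure for itself.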
If one nanoservice needs to call another, which is very common, the platform internally handles that request. This ensures communication is extremely quick and errors can be handled in a standard, graceful way. To facilitate this communication, we use Redis; its super-fast and flexible cache works well as a message bus.
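A sketch of what platform-mediated cross-calls might look like (hypothetical; an in-memory map here stands in for the Redis-backed message bus, and the real platform is far more sophisticated). Each nanoservice is handed an `invoke` function, so calls between them stay fast and failures are handled uniformly:

```javascript
// Cross-nanoservice calls routed through the platform, with a standard
// result envelope so every caller handles errors the same way.
const services = new Map();

function invoke(name, params) {
  const fn = services.get(name);
  if (!fn) return { ok: false, error: `Unknown nanoservice: ${name}` };
  try {
    // The callee also receives `invoke`, so it can call others in turn.
    return { ok: true, value: fn(params, invoke) };
  } catch (err) {
    // Standard, graceful error handling for every cross-call.
    return { ok: false, error: err.message };
  }
}

services.set('get-headlines', () => ['One', 'Two']);
services.set('render-page', (params, invoke) => {
  const headlines = invoke('get-headlines', {});
  // Degrade gracefully if the upstream nanoservice fails.
  return headlines.ok ? headlines.value.join(' | ') : 'Headlines unavailable';
});

const pageResult = invoke('render-page', {});
```

Because every call goes through the same envelope, a failing dependency degrades one component rather than taking down the whole page.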
And so, whilst nanoservices share many of the benefits of microservices (smaller, safer units that multiple teams can build and deploy), the operational approach is very different. It is less about running separate instances, and more about dynamic updating of a platform. To the creators of nanoservices, it is a serverless model, even though of course there are servers behind the scenes. So far we have chosen not to use the serverless platforms offered by cloud providers, such as AWS’s Lambda, because they can’t guarantee such a low execution and cross-communication latency. But this is undoubtedly something that will change as the serverless platforms evolve.
Internally, we call this platform Morph, and it runs thousands of nanoservices a second. It is not open source (though we hope to make it so in the future).
This approach is not without its difficulties, mainly around control and understanding. If large numbers of developers are creating nanoservices that are then interconnected in multiple ways, how do you maintain an understanding of what should be happening?
Broadly, the lessons we have learned (though not necessarily solved) are:
- Isolation is hard. Because the underlying platform is shared, tenants must behave. Nanoservices must not be permitted to do anything that could affect other tenants, such as running for too long or making too many external calls. For this reason we ‘sandbox’ each execution, although it is hard to do this consistently without limiting flexibility.
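One of these sandbox concerns can be sketched quite simply (a toy example with invented limits; real isolation also covers CPU time, memory, and more, and is much harder): capping how many external calls a nanoservice may make per execution by handing it a guarded fetcher instead of direct network access.

```javascript
// Sandbox sketch: enforce a per-execution budget on external calls.
function sandboxed(fn, { maxCalls = 3 } = {}) {
  return (params) => {
    let calls = 0;
    // The platform supplies a guarded fetcher rather than letting the
    // nanoservice reach the network directly.
    const guardedFetch = (url) => {
      if (++calls > maxCalls) {
        throw new Error(`Call budget of ${maxCalls} exceeded`);
      }
      return `response from ${url}`; // stand-in for a real HTTP call
    };
    return fn(params, guardedFetch);
  };
}

const wellBehaved = sandboxed((params, fetch) => fetch('api.example/one'));
const greedy = sandboxed((params, fetch) => {
  for (let i = 0; i < 10; i++) fetch('api.example/many'); // exceeds budget
  return 'done';
});
```

The same wrapper pattern extends to other limits (timeouts, payload sizes), though each one is another trade-off between safety and flexibility.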
- We’re not sure whether nanoservices should be larger or smaller. Smaller ones can be used more nimbly, but increase the number of distinct things to understand, test, and release. And unless the test and release process is incredibly slick, this can be time-consuming.
- A good nanoservice catalogue is important. Developers need to easily see what’s been made already, and its quality, to make a good decision about what can be reused and what must be developed.
- Versioning is hard. Nanoservices must be versioned, because they quickly become used in many ways and keeping them all on the same version is impractical. But tracking version proliferation is not easy. Our implementation offers semantic versioning, as with standard Node.js modules, which is powerful but does not encourage a good upgrade policy on its own.
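The semantic-versioning idea can be illustrated with a toy resolver (illustrative only; real tooling such as npm's semver package handles pre-releases, 0.x caret semantics, and much more): given the versions deployed for a nanoservice, pick the highest one compatible with a caret range like `^1.2.0`, meaning the same major version at or above the stated minor and patch.

```javascript
// Toy caret-range resolver for versions of the form "major.minor.patch".
function resolveCaret(range, available) {
  const parse = (v) => v.split('.').map(Number);
  const [major, minor, patch] = parse(range.replace('^', ''));
  const candidates = available
    .map(parse)
    .filter(([M, m, p]) =>
      M === major && (m > minor || (m === minor && p >= patch)));
  if (candidates.length === 0) return null;
  // Highest compatible version wins.
  candidates.sort((a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2]);
  return candidates[candidates.length - 1].join('.');
}

const resolved = resolveCaret('^1.2.0', ['1.1.9', '1.2.3', '1.4.0', '2.0.0']);
```

The hard part is not the resolution itself but the human policy around it: ranges like this let every consumer drift onto a different version unless teams actively converge.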
Taking this concept further
Our journey into nanoservices will continue, as we continue to learn how best to use them in practice. Recent Netflix posts (part 1 & part 2) describe a similar concept. We think that, notwithstanding the challenges listed above, there is great potential in this idea. By allowing different teams to quickly contribute and reuse small units of logic, rich and powerful systems can be made that should be easier to understand and operate at scale. And by hosting it on a safe and shared platform, teams are freed up from the overhead of configuring and operating their own services.
As expert tech trend predictor Simon Wardley says, we should expect the whole world to be overtaken by serverless by 2025. That’s going to take some fundamentally new thinking in how we architect and operate large software. In such a world, where the concerns are less about hosting and more about understanding and complexity, why not break software into smaller, understandable, reusable chunks? The road to serverless will be fascinating, and a nanoservice model might be an architectural paradigm that helps us create serverless software at scale.
Thanks to Jonathan Balls, Jonny Glancy and Mike Smith for contributing to this post.