For the past 3 years, the Radio and Music Services (RMS) team has been building APIs for iPlayer Radio, BBC Music and podcasts in the Radio and Music metadata space. We are an Agile software development team made up of 8 Software Engineers, 3 Testers, a Product Owner, a Project Manager and a Business Analyst. Our vision is:
“We encapsulate business logic for Radio & Music products on all platforms. We add value by providing the right blend of metadata, reliably and fast.”
In 2017 the Radio and Music department were tasked with building a new personalised audio product using an API-first approach. I’m going to talk about what that means to us and how we went about building Sounds in the API.
Typically, in the past, we made API endpoints designed for the specific features our clients requested; they would write new client code, or modify existing code, to integrate with these endpoints and deliver the feature. Clients still integrated with various BBC APIs to build products, but the benefits of a platform-independent API providing a single integration point to these services were obvious. For Sounds we decided to go one step further and define the layout and content of the product in an API.
We love Scala (after we overcame the learning curve), and our APIs are built predominantly using Akka HTTP and Akka Streams. We’ve found these tools provide excellent performance on modern cloud server architecture; they are scalable, resilient and a fantastic fit for the kind of concurrent retrieval of data we need from various BBC systems and databases.
Programme metadata is our bread and butter, and customising it to each user’s needs is at the core of Sounds. At the time of writing, we have roughly 350,000 pieces of available audio we want users to be able to access. We think we should be able to serve up some of the oldest content, such as Britain declares war on Germany, just the same as the latest Breakfast show with Greg James.
Do one thing and do it well
We’ve tried to stick to this principle, giving us the flexibility to deploy and scale services individually.
The internal architecture at the BBC means that we typically hydrate programmes with personalised information from various sources:
- User Activity Service for play history, Bookmarks and Subscribing to Programmes
- Recommendations API for programmes recommended to a user from their listening history
- BBC Account for authentication against the BBC’s cross-product identification system
We use the API composition design pattern for returning most of this content; the Personalised Programmes and Recommendations services are examples of this. We pass a user’s authentication token to an external service and get a list of programme identifiers back, then verify that those programmes are available and return a list of programmes back upstream.
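The shape of that composition can be sketched with plain Scala Futures (the real services are HTTP-backed Akka services; the function and programme names here are hypothetical stand-ins):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Minimal sketch of the API composition pattern: fetch programme ids
// for a user from a downstream service, then keep only those that are
// still available, returning hydrated programmes upstream.
case class Programme(id: String, title: String)

// Stand-in for a downstream call that turns an auth token into ids.
def recommendedIds(authToken: String): Future[List[String]] =
  Future.successful(List("p001", "p002", "p003"))

// Stand-in for the availability check: unavailable ids are dropped.
def availableProgrammes(ids: List[String]): Future[List[Programme]] =
  Future.successful(
    ids.collect {
      case id @ "p001" => Programme(id, "Breakfast")
      case id @ "p003" => Programme(id, "Desert Island Discs")
    }
  )

def personalisedProgrammes(authToken: String): Future[List[Programme]] =
  for {
    ids        <- recommendedIds(authToken)
    programmes <- availableProgrammes(ids) // only available programmes survive
  } yield programmes

val result = Await.result(personalisedProgrammes("user-token"), 5.seconds)
```

Sequencing the two calls in a for-comprehension keeps the composition readable while each downstream fetch stays an independent, testable function.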
A piece of work undertaken by a combination of Engineering, UX, Product and Project teams across RMS and client teams was to simplify the domain objects returned by the API. We hedged our bets that we would mostly be returning a combination of lists of programme content or single entities. We whittled it down to 4 main types:
- Playable: the core of the product and an object you will see everywhere. In a nutshell, something you can play. It can be an episode, clip or live stream, but clients don’t need to know that.
- Display: a piece of content, such as an image, link, text or placeholder.
- Container: a container of other programme objects. Typically a programme brand or series, a category, radio network or editorially curated collection of programmes.
- Broadcast: programmes with specific dates and times, linked to Playable items (on-demand or live streams) or Display items (typically things that aren’t available yet).
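As an illustration, the four types above map naturally onto a sealed ADT in Scala. This is a hypothetical sketch, not the API's actual model; the field names are invented and the real API returns JSON objects:

```scala
// Sketch of the four domain object types as a sealed trait hierarchy.
sealed trait DomainObject

// Something you can play: episode, clip or live stream,
// but clients never need to distinguish which.
case class Playable(id: String, title: String) extends DomainObject

// A piece of content: an image, link, text or placeholder.
case class Display(id: String, contentType: String) extends DomainObject

// A container of other programme objects: a brand, series,
// category, network or editorially curated collection.
case class Container(id: String, items: List[DomainObject]) extends DomainObject

// A programme at a specific date and time, linked to a Playable
// (on demand / live) or a Display item (not yet available).
case class Broadcast(id: String, start: String, item: DomainObject) extends DomainObject

val brand = Container("b1", List(Playable("p1", "Episode 1")))
```

Keeping the hierarchy sealed means the compiler can check that clients of the model handle every variant, which fits the goal of small, consistent response types.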
Each of these object types was refined to return all necessary information, but ONLY what was agreed with all teams. Making them as small as possible optimised the size, efficiency and consistency of our responses.
We thought another good bet would be to define the experience of the product in the API, maintaining consistency across platforms. This also means there is one place to change layout, messaging and internationalisation, while clients are free to apply their expertise in rendering that view in the best possible way for their platform. We began investigating returning all the metadata content a page needs in a single request, benefitting mobile clients in particular. The continuing improvement of our services gave us confidence in their performance and response times: we could manage all of this inside the Experience API and deliver everything apart from the platform-specific layout. As we developed this we found that not only could we improve page load times and latency, but also improve client device and server performance while simplifying client code. This works particularly well for Search results, the Sounds homepage (Discover), Containers (Brands, Series and Collections) and My Sounds in the app.
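The idea of an experience defined in the API can be sketched as a response that carries both layout and content. Everything here, the module names and the `ExperiencePage` shape, is an illustrative assumption, not the actual Sounds payload:

```scala
// Hypothetical sketch of an Experience-style response: the API decides
// which modules appear, in what order, and with what titles, so every
// platform renders the same experience from one payload.
case class Module(id: String, title: String, programmeIds: List[String])
case class ExperiencePage(modules: List[Module])

// One request returns the whole page; clients walk the modules in
// order and render each natively for their platform.
def homepage(userToken: String): ExperiencePage =
  ExperiencePage(List(
    Module("continue-listening", "Continue Listening",  List("p001")),
    Module("recommendations",    "Recommended For You", List("p002", "p003")),
    Module("music-mixes",        "Music Mixes",         List("p004"))
  ))

val page = homepage("user-token")
```

Because layout, ordering and messaging live server-side, a change to any of them ships once in the API rather than three times across web, Android and iOS.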
It’s been an enjoyable (but busy) year, with lots of interesting challenges and fun problems to solve. The team are really proud of their work and hope our users benefit from it, whether they’re enjoying Sounds through the mobile apps (Android, iOS) or the web.
In 2019 we’re looking to personalise BBC Sounds even more through audience segmentation, and to release new features through multivariate testing, providing true cross-platform multivariate tests and measurement capabilities to our clients. We’re also looking at content discovery feeds such as popularity, allowing the audience to find more of the audio content they love, and at improving the ‘continuous play’ experience. We’ll let you know how we get on.
If you are interested in working on projects like BBC Sounds or a wider variety of exciting, complex and large scale problems, come and join us: Latest jobs in D&E TV & Radio