Building a WebAssembly Runtime for BBC iPlayer and enhanced audience experiences

Juliette Carter
BBC Product & Technology
7 min read · Feb 25, 2021

In BBC R&D, we are investigating how to evolve our current multimedia applications to move beyond video by using Object Based Media (OBM). OBM allows us to develop future audience experiences which are immersive, interactive and personalised.

There is an ever-increasing number and range of audience devices capable of playing back OBM experiences. The challenge we now face is one of universal access: how can we enable all members of the audience to enjoy OBM experiences on any device, and how do we do this sustainably, and at minimal cost?

The Render Engine Broadcasting (REB) project has therefore been investigating new technologies that will allow the BBC to deliver these OBM experiences at scale to all BBC audiences, no matter what device they use. Our ultimate goal is to be able to deliver real-time, fully rendered experiences on any device or platform, and to write the software to do it only once. We have been investigating the use of WebAssembly as a cross-platform technology for this.

What is WebAssembly?

WebAssembly (wasm) is a universal binary format designed as a sandboxed environment and a portable compilation target, which means that the same wasm module can run securely on multiple platforms. A number of strongly typed languages, such as C/C++, Rust and AssemblyScript, can compile to WebAssembly, making it language agnostic. This makes it an attractive option for adoption in the industry, as it enables developers to use languages they already know to produce wasm binaries.

When WebAssembly was first developed a few years ago, its main target platform was the web. The aim was to be able to compile fast and efficient system-level code and have it run in the browser. Compute-intensive applications, such as real-time interactive rendered graphics, could therefore run in a web browser at near-native performance. This also enabled some native applications to be ported to the web, increasing their reach and usage. These include Google Earth, which renders 3D representations of satellite imagery in the browser, and AutoCAD, which now offers a web app to create and edit CAD drawings.

In the last couple of years, WebAssembly outside of the browser has been gaining traction. A number of native wasm runtimes have been developed, which has enabled the use of WebAssembly for microservices and server applications. In 2018, the web infrastructure and security company Cloudflare announced support for WebAssembly in its edge Workers platform, allowing users to deploy secure and fast serverless code compiled to wasm. Fastly, an edge cloud platform provider, offers wasm-based edge computation using its native runtime, Lucet.

The portability of WebAssembly across multiple platforms and its security model are the key reasons for BBC R&D's interest in using this technology as a compilation target for media experiences. As a public service broadcaster, we need to be able to deliver value to all of our audiences, regardless of the device they use. Where traditionally this would require a separate codebase for each target platform, and a different team to maintain each codebase, WebAssembly potentially allows for a much more sustainable developer ecosystem. It enables media software applications to be created once, from a single codebase, compiled to WebAssembly and deployed on any client or server platform depending on the capabilities required. It also offers numerous advantages over previous multimedia and cross-platform technologies (such as Flash or the Java Runtime Environment): it is language agnostic, security focused, has predictable performance, and works both inside and outside the browser. WebAssembly is also an open standard, which encourages its adoption.

How have we used WebAssembly?

We wanted to demonstrate how WebAssembly could be used to deliver media experiences that can run on many target platforms, built from a single codebase. To do that, we implemented an example media application, written in C++, which we compile to WebAssembly to produce a wasm module. We designed this application to look like a version of BBC iPlayer which allows users to select content, watch video programmes, and play back OBM experiences. We call this application the Single Service Player (SSP).

The Single Service Player (SSP) — our example media application

To run our SSP wasm module, we needed a wasm runtime. The SSP makes use of some low-level media functionality, which isn’t scoped by the WebAssembly specification. To enable wasm modules to make use of these low-level multimedia capabilities, they need to be implemented in the runtime and made available to the wasm module through a set of imports. Examples of such capabilities include:

  • Windowing and rendering — In the majority of cases, a multimedia application will have some graphical elements to it, which requires things to be drawn in a window (such as video frames, or a UI screen).
  • User inputs — An interactive multimedia experience expects user inputs, such as keyboard or mouse events.
  • Media encoding and decoding — To efficiently encode and decode media (such as video frames or audio packets) it is preferable to make use of the host’s hardware resources where possible.
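
From the module's point of view, each of these capabilities surfaces as an imported function. The sketch below is purely illustrative (the names and signatures are hypothetical, not the actual REB API):

```
// Illustrative host imports for a media wasm module (hypothetical names).
// With clang, the import_module/import_name attributes bind each symbol
// to a function the runtime must provide when instantiating the module.
#include <cstdint>

extern "C" {
// Windowing and rendering.
__attribute__((import_module("reb"), import_name("create_window")))
int32_t reb_create_window(int32_t width, int32_t height);

// User inputs.
__attribute__((import_module("reb"), import_name("poll_input")))
int32_t reb_poll_input(uint8_t* event_buf, int32_t buf_capacity);

// Presenting rendered or decoded frames.
__attribute__((import_module("reb"), import_name("submit_frame")))
void reb_submit_frame(int32_t window, const uint8_t* rgba, int32_t len);
}
```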

As there is currently no WebAssembly runtime which offers these media capabilities, we decided to create our own.

There are already some efforts to specify ways a wasm module can talk to the host. WASI (the WebAssembly System Interface) proposes a set of standardised POSIX-like syscalls (the programmatic way in which a computer program communicates with the host system) for libc functionality, mainly file handling and networking. These are called from the wasm module and implemented in the runtime.
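
As a simple illustration of this model, an ordinary C++ program compiled with the wasi-sdk can keep using standard-library I/O; wasi-libc lowers a call like the one below to the WASI fd_write syscall, which the host runtime implements:

```
// Compiled to wasm32-wasi, this standard-library call is lowered by
// wasi-libc to the WASI fd_write syscall, implemented by the host runtime.
#include <cstdio>

int main() {
    std::puts("Hello from a wasm module");
    return 0;
}
```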

We decided to use a similar approach to allow our SSP wasm module to communicate with the host, enabling it to access low-level media functionality. This involved identifying all the platform-specific media capabilities that could not be compiled to wasm and implementing them in the runtime. These capabilities were then made accessible to the wasm module through a set of platform-independent syscalls passed as imports.

This figure illustrates the whole process, from writing a media experience as software (such as the SSP) to running it as a wasm module on any device. The steps are detailed below.

Block diagram illustrating compiling and running a media application as a wasm module

The first step was to design the multimedia syscall API behind which our cross-platform multimedia capabilities would be implemented in the runtime. Careful consideration was needed to ensure it was thread safe and honoured the wasm security requirements around memory access. In the figure above, we use reb_decode_video() as an example syscall, which our SSP application can make use of to access low-level multimedia functionality, such as utilising the system's hardware for video decoding.
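
To make that concrete, here is a sketch of how reb_decode_video() might be declared on the module side. The actual REB signature isn't shown in this article, so the parameters below are assumptions:

```
// Hypothetical module-side declaration of the reb_decode_video() syscall.
// The pointers are offsets into the module's linear memory: the runtime
// bounds-checks them before touching host memory, which is how the wasm
// security requirements around memory access are honoured.
#include <cstddef>
#include <cstdint>

extern "C" {
__attribute__((import_module("reb"), import_name("decode_video")))
int32_t reb_decode_video(const uint8_t* encoded, size_t encoded_len,
                         uint8_t* frame_out, size_t frame_capacity);
}
// When built with wasi-sdk clang++, unresolved reb_* symbols can be left
// as wasm imports (e.g. with -Wl,--allow-undefined) for the runtime to
// satisfy at instantiation time.
```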

Our SSP code was compiled to wasm using the clang compiler and the wasi-sdk toolchain, with the required syscalls added as imports to the wasm module.

We then built the multimedia wasm runtime, which consists of two parts. The first is the execution environment for wasm modules, which provides the capability of loading and running a wasm module. For this, we embedded Wasmtime, a Bytecode Alliance project based on Cranelift, which generates the machine code for the target platform from the wasm binary.
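
For a flavour of what embedding looks like, here is a minimal sketch using Wasmtime's C API from C++. Error handling is abbreviated, the module path is hypothetical, and exact signatures vary between Wasmtime versions:

```
// Minimal sketch: load a wasm module and instantiate it via the Wasmtime
// C API. Host syscalls would be registered on the linker before
// instantiation; export lookup and calling are indicated in comments.
#include <wasmtime.h>
#include <cstdio>
#include <vector>

int main() {
    // Engine: compiles wasm to native machine code (via Cranelift).
    wasm_engine_t* engine = wasm_engine_new();
    // Store: owns all runtime state for instances.
    wasmtime_store_t* store = wasmtime_store_new(engine, nullptr, nullptr);
    wasmtime_context_t* ctx = wasmtime_store_context(store);

    // Read the module bytes from disk (path is hypothetical).
    std::FILE* f = std::fopen("ssp.wasm", "rb");
    if (!f) return 1;
    std::fseek(f, 0, SEEK_END);
    long len = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<uint8_t> bytes(static_cast<size_t>(len));
    if (std::fread(bytes.data(), 1, bytes.size(), f) != bytes.size()) return 1;
    std::fclose(f);

    wasmtime_module_t* module = nullptr;
    if (wasmtime_module_new(engine, bytes.data(), bytes.size(), &module))
        return 1;

    // The linker resolves the module's imports; this is where a runtime
    // would register WASI and its media syscalls as host functions.
    wasmtime_linker_t* linker = wasmtime_linker_new(engine);

    wasmtime_instance_t instance;
    wasm_trap_t* trap = nullptr;
    if (wasmtime_linker_instantiate(linker, ctx, module, &instance, &trap)
        || trap)
        return 1;

    // From here, the embedder looks up exports (e.g. the entry point) with
    // wasmtime_instance_export_get() and invokes them via wasmtime_func_call().
    wasmtime_module_delete(module);
    wasmtime_store_delete(store);
    wasm_engine_delete(engine);
    return 0;
}
```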

The second part of our runtime is the implementation of the low-level multimedia functionality. For this, we created a cross-platform C++ library with input detection, networking, windowing, graphical rendering, and media decoding, which sits behind our carefully designed syscall APIs. We were able to compile our library for a number of target platforms, such as Linux, macOS, Windows, Raspberry Pi and Android. We also wrote some glue code to connect the two parts together.
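
The shape of such a library can be pictured as one abstract interface per capability, with a platform-specific backend behind each, selected at build time. A simplified, hypothetical sketch:

```
// Hypothetical shape of a cross-platform media library: the runtime's
// syscall implementations call through a single interface, so the wasm
// module never sees platform-specific details.
#include <cstddef>
#include <cstdint>
#include <memory>
#include <vector>

struct VideoFrame {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> pixels;  // e.g. RGBA
};

class VideoDecoder {
public:
    virtual ~VideoDecoder() = default;
    virtual bool decode(const uint8_t* data, size_t size, VideoFrame& out) = 0;
};

// One implementation per platform, chosen at build time: for example,
// hardware decoding via VAAPI on Linux, VideoToolbox on macOS, or
// MediaCodec on Android.
std::unique_ptr<VideoDecoder> makePlatformVideoDecoder();
```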

Where do we go from here?

A wasm runtime capable of executing multimedia applications opens up a lot of possibilities, principally around flexible compute. Flexible compute allows us to execute computationally demanding applications by dividing up the workload between available resources. These resources could be located locally (a laptop, games console or phone in your house), at the edge, or in the cloud.

As we move towards delivering fully rendered real-time interactive experiences, the flexible compute approach becomes an attractive solution to the computational demands of such applications. We could, for example, segment a rendered frame into several tiles, or objects, and render each of them on a separate available compute resource. Many systems approach this problem by running specific compute tasks in containers across the available devices and platforms. We hope to use our work and accrued knowledge in developing the wasm multimedia runtime to investigate a viable alternative to the container approach for distributed media applications. We are looking into using wasm modules to perform secure and fast computation on any remote compute node.
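
As a toy sketch of the tiling idea (the names and the round-robin policy are ours for illustration, not a REB scheduler):

```
// Toy sketch: split a frame into tiles and assign each tile to one of the
// available compute nodes round-robin. A real scheduler would also weigh
// node capability, current load and network cost.
#include <cstdio>
#include <string>
#include <vector>

struct Tile { int x, y, width, height; };

std::vector<Tile> splitFrame(int frameW, int frameH, int tilesX, int tilesY) {
    std::vector<Tile> tiles;
    const int tw = frameW / tilesX, th = frameH / tilesY;
    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx)
            tiles.push_back({tx * tw, ty * th, tw, th});
    return tiles;
}

int main() {
    // Hypothetical pool: each node would run the render task as a wasm module.
    const std::vector<std::string> nodes = {"phone", "laptop", "edge-0"};
    const auto tiles = splitFrame(1920, 1080, 4, 2);
    for (size_t i = 0; i < tiles.size(); ++i) {
        const Tile& t = tiles[i];
        std::printf("tile (%d,%d %dx%d) -> %s\n", t.x, t.y, t.width, t.height,
                    nodes[i % nodes.size()].c_str());
    }
    return 0;
}
```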

Our runtime, capable of performing media services such as rendering and the decoding or encoding of rendered video frames, can therefore be used not only to display the final experience to the user on a client device, but also to execute remote computational tasks as wasm modules. By combining WebAssembly with a flexible compute approach, we hope to develop technology which allows audiences to access any future experience, regardless of the devices they might have at home.
