Visualising Audio: BBC News Labs’ Adaptation of the Audiogram Generator
In 2013, the Director General of the BBC, Tony Hall, set the Corporation an ambitious target: almost doubling our global audience, to half a billion people by 2022. Audio plays a big role in achieving that goal. With a dramatic expansion in the BBC World Service, coupled with a mandate to drive growth on mobile and social platforms, we faced a question audio producers around the globe are grappling with…
How do we harness the power of audio to expand our reach on platforms that are, well, visual?
In 2016, WNYC launched their latest innovation in social audio, a tool designed to tackle this very conundrum: the Audiogram Generator.
The social team at New York Public Radio experimented and iterated through various solutions, ultimately settling on a simple, web-based tool that takes audio files and generates video with animated waveforms. Judging by their Medium post, the tool appears to have become an enormously successful means of promoting their audio content on platforms that are inherently video-biased.
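Under the hood, a tool like that maps audio amplitude to animated bars, frame by frame. As a rough illustration of the idea (not WNYC's actual implementation, which is Node-based), here's a sketch in Python; the frame rate, bar count and sine-wave input are all placeholders:

```python
import math

FPS = 30            # video frame rate (placeholder)
SAMPLE_RATE = 44100 # audio sample rate
BARS = 64           # waveform bars drawn per video frame

def frame_peaks(samples, sample_rate=SAMPLE_RATE, fps=FPS, bars=BARS):
    """For each video frame, split that frame's slice of audio into
    `bars` buckets and take the peak amplitude of each bucket."""
    per_frame = sample_rate // fps
    frames = []
    for start in range(0, len(samples) - per_frame + 1, per_frame):
        window = samples[start:start + per_frame]
        bucket = max(1, len(window) // bars)
        frames.append([
            max(abs(s) for s in window[i:i + bucket])
            for i in range(0, bucket * bars, bucket)
        ])
    return frames

# One second of a 440 Hz sine wave as stand-in audio
samples = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE)
           for t in range(SAMPLE_RATE)]
peaks = frame_peaks(samples)
```

Each inner list can then be drawn as one frame's bars, and the frames stitched into a video with a tool like FFmpeg.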
Then they open-sourced their efforts so the rest of the world could freely re-use and expand on their great work. So I did.
Repurposing Audiogram for the BBC
We trialled WNYC’s original code with a couple of teams around the office, and established a few internal clients who would benefit from a bespoke implementation. There were three main aspects I looked at evolving…
- Integration with other internal BBC systems
- Transcription and subtitling
- Customisable themes/designs
Streamlining Audiogram into existing production workflows was key to encouraging staff to start using it routinely. Journalists and producers have enough applications to juggle already, so we wanted to provide seamless inward and outward journeys that would slot Audiogram into the publishing pipeline.
In addition to the original file-upload option, we introduced two further audio import routes: pulling clips from our studio playout software (VCS dira!), and directly importing audio from broadcast media that has been simulcast online. Links and export options out of other BBC tools into Audiogram also allowed users to launch the app without having to navigate to the website directly.
I also integrated two image systems. Web:Cap (the BBC’s tool for generating video/image overlays) enables users to overlay standardised components like attribution labels or name straps. iChef (the BBC’s tool for serving responsive images) further allows users to pull in background images by specifying an image’s ID rather than uploading it manually.
It was also important for us to be able to monitor the use and health of the tool. Slack is an increasingly popular messaging platform used by development, journalistic and editorial staff. A Slack bot therefore seemed like the perfect means of logging Audiogram activity, and notifying us when new Audiograms are ready or warnings/errors occur.
The Slack integration helped us respond to errors swiftly, even before the user had raised an issue with us. It also helped guide the development process. For example, we noticed users were uploading video files as audio sources, and then later uploading the same video file as a background source. In response, we added an option to upload a single file once for both the audio and the background.
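As a sketch of how such a bot might post its notifications, here's a minimal example using Slack's standard incoming-webhook API; the webhook URL, event names and message wording are all hypothetical:

```python
import json
import urllib.request

# Hypothetical URL -- real incoming-webhook URLs are issued by Slack
# per workspace/channel.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_message(event, detail):
    """Format an Audiogram activity event as a Slack message payload."""
    icons = {"ready": ":tada:", "warning": ":warning:", "error": ":x:"}
    return {"text": f"{icons.get(event, ':mega:')} Audiogram {event}: {detail}"}

def notify(event, detail, url=WEBHOOK_URL):
    """POST the message to a Slack incoming webhook."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_message(event, detail)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```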
Subtitling was a biggie.
Scrolling past a moving Audiogram video on your social timeline is nice, but having that video properly subtitled can make it even more captivating. Facebook is unpredictable about when and how it overlays real .srt subtitles, and other platforms don’t offer the option at all, so we wanted the ability to burn subtitles directly into the videos.
Burning in subtitles allows us to make the words a real design feature of the video, not just an accessibility afterthought. Crucially, as most platforms auto-play video silently, it also provides an opportunity for the audience to engage immediately, without the need for audio at all.
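A common way to burn subtitles in is FFmpeg's `subtitles` filter; here's a minimal sketch with placeholder filenames (not necessarily how Audiogram itself renders them):

```python
import subprocess

def burn_in_cmd(video_in, srt_in, video_out):
    """FFmpeg invocation that draws each SRT cue onto the video frames
    via the `subtitles` filter, copying the audio stream untouched."""
    return ["ffmpeg", "-y", "-i", video_in,
            "-vf", f"subtitles={srt_in}",
            "-c:a", "copy", video_out]

def burn_in(video_in, srt_in, video_out):
    subprocess.run(burn_in_cmd(video_in, srt_in, video_out), check=True)
```

Usage would be something like `burn_in("clip.mp4", "clip.srt", "subtitled.mp4")`, with FFmpeg on the PATH.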
The problem with subtitling is that transcribing and properly time-coding text is slow and expensive.
Thankfully, we have some pretty cool transcription technology (based on a BBC R&D variant of the Kaldi toolkit) that auto-generates fully time-coded transcripts for us. It’s not super quick (around real-time), but it’s significantly quicker than human transcription. Audio imported from our VCS playout software is also already auto-transcribed, which makes the process effectively instant.
Producers can edit and tweak the script, correcting typos, punctuation and distinguishing between speakers, all within the web app. The transcript also offers an alternative means of trimming the audio, without having to fiddle with the waveform.
In addition to, or instead of, burning subtitles in, we generate exportable SRT and EBU-TT-D subtitle files (for Facebook and BBC publishing respectively) to maintain accessibility standards.
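The SRT side of this is simple enough to sketch. Assuming transcript segments arrive as (start, end, text) tuples — an invented shape for illustration — generating the file looks roughly like this (EBU-TT-D, being XML-based, is more involved and omitted here):

```python
def srt_timestamp(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """segments: list of (start_seconds, end_seconds, text) tuples.
    Returns the cues as numbered, blank-line-separated SRT blocks."""
    blocks = [
        f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
        for i, (start, end, text) in enumerate(segments, start=1)
    ]
    return "\n".join(blocks)

srt = to_srt([(0.0, 2.5, "Hello from the BBC."),
              (2.5, 5.0, "This is an audiogram.")])
```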
Finally, we added the ability to fully customise the design and format of the video via the web interface.
Was that a good idea? Yes and no.
Yes, because without giving teams the flexibility to customise the design of their videos, they wouldn’t have adopted the tool as quickly. We gave access to dozens of people across the Corporation, all of whom required bespoke formatting. As the only developer on the project, I needed a way of enabling users to design their own themes without having to code them all myself.
No, because it put creative control in the hands of every user, bypassing the formal design staff and trusting producers to conform to their team’s standards. We mitigated this by allowing users to save their customised themes. Team leaders would design a theme for their sub-brand, save it, and instruct their producers to use that theme by default in the future.
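A saved theme might look something like the following; the field names and values are invented for illustration, not the tool's actual schema:

```python
import json

# Hypothetical theme for a sub-brand -- every field here is illustrative.
world_service_theme = {
    "name": "World Service default",
    "width": 1280,
    "height": 720,
    "backgroundColor": "#8B0000",
    "waveColor": "#FFFFFF",
    "waveStyle": "bars",
    "captionFont": "Helvetica",
    "captionColor": "#FFFFFF",
}

saved = json.dumps(world_service_theme, indent=2)
```

Storing themes as plain JSON like this makes them easy to save, share and set as a team default.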
Exposing all the formatting features on the front-end produced a fairly cluttered UI (apologies to any UX designers scoffing at me right now). However, because this initial iteration of the tool was developed as a prototype without the full technical support users could normally expect of live production software, it was important to offer all these configuration options and remove the dependency on development staff.
More than just social media
The Audiogram project first launched as a tool to showcase audio on video-dominated social platforms, but it can be used just as effectively to visualise audio in other situations — natively on the BBC website, on BBC iPlayer, or on broadcast television (replaying 911 calls on the news, for instance).
With so much interest in the auto-transcription feature, the tool has also been used to simply burn subtitles into existing videos quickly.
Measuring success is tricky. We weren’t previously posting audio-only content on social media to the same extent as WNYC, so it’s difficult to establish a benchmark against which to evaluate the performance of these new videos. Because they’ve been introduced alongside other new digital initiatives, attributing broader social growth to them is similarly difficult. It has, however, driven an increased volume of online content, all of which contributes positively to our reach objectives.
What we can say is that Audiogram has given producers without access to professional software a simple web-based tool for generating video. As a result, content is published more quickly and journalists rely less often on specialist teams to render their graphics. The time saved there is a success story in itself.
There are a few more features we’ll consider incorporating — manual transcription (for when the audio quality is too poor for an automated one), and Ken Burns effect background slideshows, for instance. For now though, I’m excited to sit back and watch how various teams across the BBC utilise the tool.
We’ll be open-sourcing our efforts, and look forward to collaborating with other Audiogram users from organisations around the globe. Join the conversation using #Audiogram, and get in touch via @BBC_News_Labs and @JontyUsborne.
About BBC News Labs
Founded in 2012, BBC News Labs is an innovation incubator charged with driving innovation in news. We are a multi-discipline team, exploring scalable opportunities at the intersection of journalism, technology, and data. We work closely with BBC News, BBC News Product & Systems and BBC R&D, while collaborating with news organisations and research institutions globally.