How News Organizations Use Algorithms to Decide What to Show You

Partnership on AI · Published in AI&. · Dec 1, 2020

By Jonathan Stray

Imagine, for a moment, that you’re one of the biggest media organizations in the world. Every day, your journalists create countless videos, articles, podcasts, and more. This wealth of information greatly exceeds what any one person could watch, read, and listen to in 24 hours. So, with all that content, how do you decide what to show your audience?

This is one of many tough questions that two public broadcasters, the British Broadcasting Corporation (BBC) and the Canadian Broadcasting Corporation (CBC), face each day. (The BBC and CBC are both Partner organizations in the Partnership on AI.) To help them answer it, they have partially turned to artificial intelligence. Like many other news organizations, the BBC and CBC now use recommender systems. These machine learning algorithms take large data sets — like, say, a list of hundreds of videos — and sort through them to make customized selections for individual users.
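At its simplest, a recommender of this kind scores every candidate item for a given user and returns a short personalized slate. A minimal sketch of that idea (the scoring rule here, topic affinity weighted by recency, is an illustrative assumption, not either broadcaster's actual model):

```python
# Minimal sketch of a news recommender: score each candidate item for a
# user and return the top-k as a personalized slate. The scoring rule
# (topic affinity x recency) is purely illustrative.

def recommend(items, user_topic_affinity, k=3):
    def score(item):
        # Items on topics the user has no affinity for score 0.
        return user_topic_affinity.get(item["topic"], 0.0) * item["recency"]
    return sorted(items, key=score, reverse=True)[:k]

items = [
    {"id": "v1", "topic": "politics", "recency": 1.0},
    {"id": "v2", "topic": "sport",    "recency": 0.8},
    {"id": "v3", "topic": "politics", "recency": 0.5},
    {"id": "v4", "topic": "weather",  "recency": 1.0},
]
user = {"politics": 0.9, "sport": 0.6}
print([i["id"] for i in recommend(items, user)])  # ['v1', 'v2', 'v3']
```

Real systems learn these affinities from behavior rather than taking them as given, but the shape of the problem, ranking thousands of items into a handful of slots, is the same.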

You might be familiar with similar systems that power your Netflix recommendations or choose the next video to play on YouTube. Personalizing the news, however, is a bit different from trying to guess whether a subscriber prefers horror movies or comedies. While a bad movie can ruin an evening, failing to provide readers with a healthy news diet could have consequences for society as a whole.

This is all the more reason to make news recommenders a topic of public discussion and for news organizations to be transparent with their audiences about how they choose articles. Here’s how the BBC and CBC use personalized recommendations in their products, how they think about issues like timeliness and diversity of content, and how they address the problems that can come from optimizing for engagement.

Up Next

At their most basic level, recommender systems are an acknowledgement that not everyone wants the same thing. A human editor might be able to highlight what they consider to be their organization’s best work, but it would be impossible for them to personalize these selections for each person in an audience of millions. The pace of production at a national news network only makes the task more difficult: According to Gabriel Straub, head of data science at the BBC, the broadcaster produces about 2,000 pieces of content each day but only has around 100 slots to display that content online.

“As a public service organisation that is funded by a license fee, it is really important to us to be relevant for everyone across the UK,” said Straub. “This means that we need other ways to showcase the breadth and depth of the BBC.” Recommender systems are one way they can do that.

Christopher Berry, director of product intelligence at the CBC, offered a similar explanation, saying, “CBC produces a lot of content, and we want to empower Canadians to find the content that’s relevant and engaging to them.” According to him, the sum of all the attention that Canadians give to media is growing by less than 1% each year. As a result, there is “ferocious competition for the attention of Canadians” with numerous publications, podcasts, websites, and TV shows vying for their interest.

Currently, said Berry, the CBC uses recommender systems on CBC Listen for podcast discovery, on its short-form video player, on Radio-Canada’s Tou.TV service, on web article pages, and on its cbc.ca/mycbc site. Over at the BBC, recommendations are similarly used on audio and video streaming services.

“We have also started putting recommendations on some BBC World Service articles to suggest further reading,” said Straub. “And we have an app that is completely algorithmically driven (but editorially supervised) that focuses on showing you short-form video.”

Working Together

At any news organization, offering personalized content recommendations requires collaboration between a variety of departments — which makes aligning everyone’s goals an important consideration.

“The mission of our news and entertainment teams is to Inform, Enlighten and Entertain Canadians,” said the CBC’s Berry. “The mission of marketing is to find audiences. The mission of my group, product intelligence, is to reduce the friction to getting to the content people need, want, and enjoy. There is alignment between the missions and the outcomes, though each group faces different constraints.”

For Straub at the BBC, the broadcaster’s editorial guidelines, a written list of standards that apply to all BBC content, act as a set of unifying principles guiding everything they do.

“The BBC has been around for almost 100 years and in that time has developed a lot of experience on how to make editorial decisions,” he said. At the BBC, any use of machine learning must take these editorial guidelines into account. “Therefore our algorithms are built by a cross-functional team that includes product, project, engineering, data science, architecture and editorial.”

According to Straub, editorial team members are there to help evaluate algorithms during development, guiding their direction. Depending on the product, editorial is often part of the sign-off chain as well, and must approve a new recommendation engine before it can go live. Additionally, Straub said, editorial helps define “so-called ‘business rules’” for the algorithm, such as, “don’t show content in a language we haven’t seen the user consume before.”
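The kind of business rule Straub describes can be thought of as a filter applied after the algorithm ranks its candidates. A sketch of that pattern (all names hypothetical, not the BBC's actual code):

```python
# Sketch of an editorially defined "business rule" applied as a
# post-ranking filter. Field and function names are hypothetical.

def apply_language_rule(ranked_items, user_languages):
    """Drop candidates in languages the user has never consumed."""
    return [item for item in ranked_items
            if item["language"] in user_languages]

ranked = [
    {"id": "a1", "language": "en"},
    {"id": "a2", "language": "cy"},  # removed: user has no Welsh history
    {"id": "a3", "language": "en"},
]
print(apply_language_rule(ranked, {"en"}))
```

Keeping such rules as explicit, human-readable filters, rather than folding them into the learned model, is one way editorial staff can inspect and sign off on what the algorithm is allowed to show.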

Solving for X

From a technologist’s point of view, what recommender systems are trying to solve is an “optimization problem,” according to Berry at the CBC. “Given the flow of attention coming into CBC products, what is the optimal way to increase the total amount of informing, enlightening, and entertaining?” But even with an algorithm, news organizations (or anyone else using recommender systems) still must decide what quantifiable values they will track and maximize as proxies for their larger goals.

The BBC’s Straub identified three metrics his organization pays attention to. “We use a mix of recency (we prefer to surface more recent content as this is usually more relevant), diversity (the stream of content recommended should be from a diverse set of brands and cover a diverse set of topics), and accuracy,” he said. “Our accuracy metrics vary by product, but one example would be hit rate: we take part of a user’s history to provide recommendations and then compare that to what content they consumed in the rest of their history. By comparing the recommended list with their consumed list, we can get a sense of how close our recommendations are to their interests.”
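The hit-rate evaluation Straub describes, recommending from part of a user's history and checking the overlap with the held-out rest, can be sketched in a few lines (a hypothetical example, not BBC code):

```python
# Sketch of a hit-rate check: recommendations are generated from part of
# a user's history, then compared against the held-out remainder.

def hit_rate(recommended, held_out):
    """Fraction of recommended items the user actually consumed later."""
    if not recommended:
        return 0.0
    hits = len(set(recommended) & set(held_out))
    return hits / len(recommended)

recommended = ["news-1", "sport-4", "radio-2", "tv-9"]
held_out    = ["sport-4", "tv-9", "news-7"]
print(hit_rate(recommended, held_out))  # 2 of 4 recommendations hit -> 0.5
```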

Berry similarly pointed to the need for tracking how helpful the recommendations really are, saying, “It’s important that the recommendation serve up something that was relevant enough to be fully engaging, not just clicked upon.” To assess whether an experience succeeded, he said, the CBC uses indicators like how far a user reads down a page or how much of a video a user watches. “Users exhibit their interests through their behaviour,” Berry added. “If they are very interested in local content, they will see a lot more local content.”
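Depth-based signals like the ones Berry mentions, scroll depth for articles, watch fraction for video, can be sketched as a simple predicate that separates genuine engagement from a bare click. The threshold and field names below are illustrative assumptions, not the CBC's actual metrics:

```python
# Sketch of a depth-based engagement signal: an item counts as "fully
# engaging" only if the user consumed most of it, not just clicked.
# Threshold and field names are illustrative assumptions.

def is_engaged(event, threshold=0.7):
    if event["type"] == "article":
        return event["scroll_depth"] >= threshold
    if event["type"] == "video":
        return event["watched_seconds"] / event["duration_seconds"] >= threshold
    return False

events = [
    {"type": "article", "scroll_depth": 0.9},
    {"type": "article", "scroll_depth": 0.2},   # clicked, then bounced
    {"type": "video", "watched_seconds": 300, "duration_seconds": 360},
]
print([is_engaged(e) for e in events])  # [True, False, True]
```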

Additionally, while both the CBC and BBC view engagement as an important metric, engagement alone cannot tell a news organization if its content is having the impact it’s hoping for.

“Engagement is a signal of enjoyment and we use that metric towards our mandate,” said Berry. “But it has been observed, and it’s generally accepted in the literature, that sensational headlines drive curiosity and interest. Moreover, sometimes the information that is important to inform and enlighten is angering, because that’s just the nature of reality.”

For him, finding out how to inspire curiosity without sensationalizing is “a more interesting question” than how to maximize engagement for its own sake. “Where we seek to make a meaningful contribution is in helping Canadians through their journey to accomplish whatever goal they’re seeking to achieve,” said Berry.

Straub at the BBC said that engagement is one metric they optimize for, in addition to being something they measure when tweaking their algorithms, but that “recommendations are only a part of the way customers discover content from the BBC — editorial decision-making remains key for us — so we try to look at how happy our audiences are with the overall product experience in addition to how satisfied they are with their recommendations.”

“At the end of the day we are in the business of creating and distributing content that people want to consume,” he said.

Serving the Public

As public broadcasters operating on behalf of millions of unique individuals, the BBC and CBC see recommender systems as a useful tool for serving their diverse audiences’ diverse needs. Nevertheless, both Straub and Berry recognize that this solution brings difficulties of its own.

Asked about the biggest challenges in creating quality personalized recommendations, Straub at the BBC highlighted the importance of transparency, saying, “We want our audiences to have real agency over their experience and to be able to control what machine learning does for them. This means that we need to be really good at explaining how and why we collect data and what we use it for.”

“But it also means that we need to be very clear about what we think are acceptable uses of machine learning,” he added. “For this we have developed the BBC Machine Learning Principles to guide our development teams. But this is still a new area and we are sure that we haven’t yet discovered all the key questions.”

Asked the same question, Berry at the CBC spoke about how recommendations fit into the broadcaster’s institutional goals. “Public broadcasting is aligned with the state’s mission of cohesion: to keep us together in spite of all the forces that pull us apart,” he said. “How should all of this information be sorted so we can see each other?

“There’s a lot we don’t know and many mysteries left to be solved,” Berry added. “We spend time actively listening to concerns, mitigating them, and finding paths forward to production. This feels like forever work, and we believe it’s worth pursuing for the public good.”

The Partnership on AI is a global nonprofit organization committed to the responsible development and use of artificial intelligence.