Our users want algorithms — but not necessarily news algorithms

It’s no use trying to personalize the news user experience if people want the Facebook algorithm.

Citizens have greater trust in news media than in social media. On the other hand, the majority prefer to have an algorithm pick the news for them rather than journalistic prioritization, a new global report on media consumption shows.

The above quote is from an article published on Journalisten.dk (the magazine for Danish journalism, communication and media photography) on June 22nd, titled ‘Danes have greater trust in news media than social media, but…’.

Personalization of the user experience (via algorithms) on media websites has been discussed and explored for a long time, in Denmark as well as in the rest of the world. For example, Journalisten.dk has previously written about it in 2014 (‘We know where you are — here is the news’, about Jyllands-Posten) and in 2012 (‘Online news will be targeted at you’, about the Danish tabloids Ekstra Bladet and BT).

The media companies’ interest in targeting information and content has multiple causes. First, the media has an interest in distributing website traffic across as many articles as possible — especially articles that users overlook in the daily bustle of a news site. Second, there is the commercial aspect: ads etc. can be targeted, both when people are logged in and when they are not, because it’s also about getting users to tell a little something about themselves.

Not a lot has happened in Denmark yet, though. Again, the causes are multiple. One of the major reasons is that personalization requires technology which very few media companies (especially in Denmark) are capable of developing and building. The easiest way forward would be to team up with a tech provider or partner (which, in essence, is what many media companies are already doing by letting data companies inside the door for ads…), but it’s risky, because who owns the data?

What should the algorithm do?

Before we in the media industry start personalizing our users’ experience with various algorithms from different tech providers, it’s worth taking a look at what algorithms can do; we have to be careful not to overestimate them.

The most famous algorithm is probably the one behind Facebook’s ‘News Feed’. It is certainly the most visible, the most talked about — and the algorithm with the most control over our lives.

But there is a huge difference between what Facebook’s ‘News Feed’ algorithm does and what the media companies are setting out to do. Your ‘News Feed’ is very much affected by what your friends are doing. It practically draws you in the direction of the content your friends are creating, commenting on and reacting to.

And since Facebook knows which of your friends you are most in contact and interaction with, it can combine the two into a pretty efficient feed where you might not even think about what you’re missing.

Facebook’s algorithm is very social.

A news algorithm is something else. It must try to get to know you and find out what you are interested in. It doesn’t know which articles on a given website many of your friends have read and shared. That information resides with Facebook.

Another well-known algorithm is the one Google uses to sort search results. But you only reach that algorithm once you have told Google what you are interested in. Yes, Google knows a lot about you (and uses it to target ads — with… varying success), but the algorithm will only help you find the information you are looking for (the answer to your question) once you are on track.

The news algorithm’s output will be its best guess at which content might interest you, based on what you’ve previously read and whatever else it may think it knows about you. This quickly becomes guesswork (consider this: how much do the articles you read really say about you?). Facebook’s algorithm can base itself more on facts; it is your best friend who has commented on that update.
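To make that “best guess” concrete, here is a minimal sketch of a content-based recommender in the spirit described above: it builds a profile from the tags of articles a user has read and ranks unseen articles by tag overlap. The article data, tags and scoring rule are all invented for this illustration; real news recommenders are far more elaborate.

```python
# Minimal sketch of a content-based news recommender.
# All article data and tags below are hypothetical examples.

from collections import Counter

def build_profile(read_articles):
    """Count how often each tag appears in the user's reading history."""
    profile = Counter()
    for article in read_articles:
        profile.update(article["tags"])
    return profile

def recommend(candidates, profile, n=3):
    """Rank unseen articles by how strongly their tags match the profile."""
    def score(article):
        return sum(profile[tag] for tag in article["tags"])
    return sorted(candidates, key=score, reverse=True)[:n]

history = [
    {"title": "Transfer window roundup", "tags": ["football", "sports"]},
    {"title": "Champions League preview", "tags": ["football", "europe"]},
]
candidates = [
    {"title": "Local election results", "tags": ["politics"]},
    {"title": "FC Barcelona injury news", "tags": ["football", "sports"]},
]

profile = build_profile(history)
top = recommend(candidates, profile, n=1)
print(top[0]["title"])  # the football article outranks the politics one
```

Note what the sketch cannot do: it only sees what this one user read, exactly the limitation discussed above — it has no signal about what your friends read or shared.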

It’s also about the content

A news algorithm introduces yet another complexity. It’s not just about getting to know you (the user) — it also has to learn things about the content on a website or media outlet.

Getting information out of the users is one thing. Something different (and overlooked, if you ask me) is getting information out of the content. By that I mean finding out (preferably programmatically) what an article is about, so you can push it to people interested in the same subjects.

As far as I know only very few media companies have success with “tagging”, where you describe the content using keywords. Quite a lot of websites are using them, but when was the last time you navigated to a webpage at a news website with nothing but articles on a given keyword, for instance ‘FC Barcelona’ or ‘The financial crisis’?

The lack of interest from the users also has the side-effect that the information architecture behind a lot of online media’s content isn’t always maintained sufficiently. Maybe the articles are sectioned based on a sender-orientered structure decided upon by the editorial room several years ago. This is where ‘Conway’s law’ makes sense:

organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations

This is the same reason why the main navigation at several media websites sometimes will be a mirroring of the internal organizational structure.

What all of this means is that even if you get people to tell you what they are interested in (or you get a machine to guess it), it’s really to no use if it’s too hard to find the right content. ‘Content is king’, as the mantra says. But it’s only king if it’s easy to locate.

Are the users in on it?

Of course, all of this is only relevant if the users are actually demanding personalized content. Are they? Well, that’s the big question.

According to the survey I mentioned at the beginning of this article, the users do ask for algorithms. But algorithms can do all kinds of things:

  • They can help you find the content and topics your friends care about — like Facebook does.
  • They can help you find the best source with the answer to your question — like Google does.
  • They can help you get an overview of the news — such as services like Google News are trying to do, without really succeeding.
  • And so on…

We have to know more about where in their media consumption people want more algorithms — and then start from there. An example: I don’t need a service that recommends me content on Netflix etc. I need a service that tells me what my friends are watching on Netflix, Viaplay, HBO etc. etc.

‘Most read’ lists are popular because they show people what others are reading and caring about. It’s a ticket to joining the talk over lunch or next to the water cooler. All of us probably want to know about just some of the things our family, friends, acquaintances and coworkers care about.

The human editors has a different role. Their job is to show people the content they need to see— that which can help them understand what is happening right now, or has taken place over a period of time. That’s the ‘job to be done’ I have, where I “hire” The Economist to tell me what has happened in the past week on the macro level in politics, society, technology and business.

Our work with algorithms should be based on what our customers and users want them to do. If I for example want to know which articles/stories my friends are excited about right know, I don’t care what some algorithm thinks a person with my profile wants to read.

Algorithms and people need to be hired for different jobs to be done — that statement should be part of the content strategy if we are to use algorithms properly.


This article was originally published on my blog, Medieblogger, in Danish.

Sources / Read more:

# Journalisten: Danskerne stoler mere på nyhedsmedier end sociale medier, men… (June 22nd, 2017)

# Journalisten: Vi ved, hvor du er — her er nyhederne (Februrary 19th, 2014)

# Journalisten: Netnyheder bliver målrettet til dig (June 7th, 2012)

# Wikipedia: Conway’s law

# Photo: Markus Spiske raumrot.com / Pexels