Are recommendation engines getting it wrong?

Why we need them and what they’re missing

Thomas Tennyson
The Fourth Wall
5 min read · May 1, 2018

We’re now two decades into the Golden Age of Television. Humanity has produced more hours of video content than any one person could watch in a thousand lifetimes. So how much of our time are we doomed to spend asking ourselves “What should I watch next?”

Therein lies a challenge for anyone with a library of video: how do you give users the power to find content they’ll love, whether they know it exists or not? Features such as watch lists, favourites, browse by category, and search are all now expected in VOD interfaces. They give users control, let them organise the library how they see fit, and help them find exactly what they’re looking for.

Playing the scrolling game

What happens when a user runs out of content on their watch list? If you’re anything like me, you know what it feels like to endlessly swipe through these platforms in search of “something”. Maybe you find “something”, maybe you don’t.

The thrill of the hunt is a concept that doesn’t last long in this domain. It quickly gets replaced with the frustration of not finding something to watch.

Definitely comparable with insomnia.

The ideal scenario

Imagine for a moment we could perfectly customise a library to each user’s personal taste and preferences, so they could enjoy it in descending order of entertainment value, and content providers could bask in the highest possible customer engagement and satisfaction.

Delivering the right content, at the right time, to each and every user sounds like a near impossible task. This is where recommendation engines come in.

These engines work behind the scenes to order content based on the information they can ascertain from a user. The more sophisticated ones usually work something like this…

To begin with, a user might select a handful of their favourite shows. Then every relevant interaction is recorded as they use the platform, such as how much time they spend watching a certain genre of film, or the rating they gave a series.

An algorithm processes this data, compares it with other users’ behaviour, and suggests content you’re likely to enjoy. The more data collected, the smarter it becomes, and the better its recommendations.
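
To make that “compare it with other users” step concrete, here’s a minimal sketch of user-based collaborative filtering. The titles, ratings, and scoring are entirely made up for illustration; real engines work at a vastly larger scale and with far richer signals.

```python
# A minimal sketch of user-based collaborative filtering.
# Titles and ratings are invented; 0 means "hasn't rated or watched it".
import numpy as np

titles = ["Drama A", "Sitcom B", "Thriller C", "Documentary D"]

ratings = np.array([
    [5, 0, 4, 0],   # the user we want recommendations for
    [4, 1, 5, 2],   # a like-minded viewer
    [1, 5, 0, 4],   # a viewer with very different taste
], dtype=float)

def cosine(a, b):
    """Similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

me, others = ratings[0], ratings[1:]

# Weight each other viewer's ratings by how similar they are to us,
# then surface the titles we haven't seen with the highest weighted score.
weights = np.array([cosine(me, other) for other in others])
scores = weights @ others
for idx in np.argsort(-scores):
    if me[idx] == 0:  # only recommend what we haven't already watched
        print(f"Recommend: {titles[idx]} (score {scores[idx]:.2f})")
```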

Netflix takes this a step further, fitting users into curated “taste groups”, of which there are thousands.
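
Netflix doesn’t publish how those groups are actually built, but you can picture the idea as clustering: represent each user as a vector of affinities and group similar vectors together. A toy sketch, with invented data and an arbitrary number of groups:

```python
# A toy illustration of grouping users into "taste groups" by clustering
# their genre-affinity vectors. The data, group count, and method are
# invented for illustration; this is not how Netflix actually does it.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one user's affinity for [drama, comedy, horror, documentary].
affinities = np.array([
    [0.9, 0.1, 0.2, 0.7],
    [0.8, 0.2, 0.1, 0.9],
    [0.1, 0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3, 0.1],
    [0.1, 0.2, 0.9, 0.1],
])

groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(affinities)
print(groups)  # e.g. users 0 and 1 share a group, 2 and 3 another, 4 its own
```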

Why are so many people unhappy?

Netflix is the gold standard here, thinking about personalisation well beyond recommendations, yet users still notoriously complain about what it suggests. Even with all the data it has, it still recommends content we’d never watch.

We don’t have access to that data, and we can’t claim to have the solution (if one even exists). Instead, we can highlight some of the shortcomings of recommendation engines today, and open the door to thinking about ways to solve these problems through design.

Profiles for individuals and groups!

#1. We don’t always watch alone

Do your viewing habits change around friends, family, or a significant other? Have you ever sat through something you wouldn’t normally pick, purely for someone else’s pleasure?

Recommendation engines don’t distinguish between something you watch as a group and something you watch alone. I’ve taken this problem into my own hands, creating a sub-account that my partner and I use to watch together.

This is likely a common hack of Netflix accounts, but does it help groups find something good to watch? The results feel more like a compromise of tastes than a feat of machine learning.
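
One design direction, purely a sketch and not how any platform actually works, is to treat the group as a temporary profile whose preferences blend its members’. Even the blending rule is a choice: a plain average compromises, while a “least misery” rule avoids anything one person would hate. All names and scores below are invented:

```python
# Sketch of building a group profile from individual predicted-enjoyment
# scores. Titles, scores, and blending rules are invented for illustration.
import numpy as np

titles = ["Drama A", "Sitcom B", "Thriller C", "Documentary D"]
alice  = np.array([0.95, 0.45, 0.70, 0.30])   # predicted enjoyment per title
bob    = np.array([0.10, 0.90, 0.60, 0.50])

average      = (alice + bob) / 2           # a plain compromise
least_misery = np.minimum(alice, bob)      # avoid anything one person dislikes

for rule, scores in [("average", average), ("least misery", least_misery)]:
    print(f"{rule}: watch {titles[int(np.argmax(scores))]}")
```

Even in this toy example the two rules disagree: the average picks Bob’s favourite sitcom, while least misery settles on the thriller neither of them would hate.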

Do minutes watched really matter?

#2. Watching doesn’t imply enjoyment

Ever forced yourself to watch a few episodes of a series your colleagues won’t stop talking about? Or started off liking something only to hate it later? The thing is, unless you tell the recommendation engine otherwise, it thinks you liked that content.

Recommendation engines will always need more context than the number of minutes you spent watching. That’s where like and dislike functionality comes into play.

But is this enough for a user to express their opinion? The complexity of our reaction to content can’t possibly fit in this dichotomy.
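
One way to think about it through design is to treat watch time as a weak signal that an explicit reaction, however coarse, can override. The scoring rule below is entirely an assumption for illustration, not any platform’s real formula:

```python
# A toy scoring rule where watch time is a weak positive signal that an
# explicit thumbs-down overrides. The weights and examples are invented.
def interest_score(minutes_watched, runtime_minutes, thumbs=None):
    if thumbs == "down":
        return -1.0                 # an explicit dislike beats any watch time
    if thumbs == "up":
        return 1.0
    completion = min(minutes_watched / runtime_minutes, 1.0)
    return 0.5 * completion         # implicit signal only counts for half

print(interest_score(400, 400, thumbs="down"))  # hate-watched the lot: -1.0
print(interest_score(45, 400, thumbs="up"))     # abandoned it, but loved it: 1.0
print(interest_score(300, 400))                 # no reaction given: 0.375
```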

Time makes fools of us all.

#3. Changing taste and mood

Video on demand hasn’t been around that long, but how long does it take for your viewing habits to mature? A year? Five? What you once considered prime viewing can quickly turn into cringeworthy nonsense, and what you like today won’t necessarily be what you like in six months.

And how does what we want to watch change from day to night, weekday to weekend, summer to winter? Human factors can totally change the type of content we want at any given moment.
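
If tastes drift, one simple lever is to let older signals fade. The sketch below applies an exponential decay to each past rating’s weight; the half-life and the history are arbitrary, illustrative choices:

```python
# Sketch: weight each past rating by how recently it was given, so a
# rating from three years ago counts for less than last week's.
# The half-life and the history are arbitrary, illustrative choices.
HALF_LIFE_DAYS = 180  # a signal loses half its weight roughly every six months

def decayed_weight(days_ago):
    return 0.5 ** (days_ago / HALF_LIFE_DAYS)

# (rating out of 5, days since the rating was given)
history = [(5, 1000), (2, 400), (4, 30), (5, 7)]

weights = [decayed_weight(days) for _, days in history]
score = sum(rating * w for (rating, _), w in zip(history, weights)) / sum(weights)
print(f"decay-weighted taste score: {score:.2f}")  # the recent ratings dominate
```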

Bubbles can backfire.

#4. Content bubbles

We all know how social media platforms show us content that conforms to our biases. Similarly, the data that powers recommendations filters out content the engine believes won’t interest you. Possibly in a way that works, but almost always in a way you didn’t ask for.

How much content will it keep out of reach because it’s labelled as one thing and not another? Separating us from content that might challenge us is as destructive as the social media problems we’re all well aware of.
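
A common counterweight, again just a sketch rather than any particular platform’s approach, is to reserve a small exploration budget so that some recommendations deliberately come from outside the user’s usual profile. Everything below is invented for illustration:

```python
# Sketch: reserve a small exploration budget so one slot in a row of
# recommendations can go to something outside the user's usual profile.
# All titles and probabilities are invented for illustration.
import random

in_profile  = ["Thriller C", "Drama A", "Thriller D", "Drama B", "Thriller E"]
out_profile = ["Documentary X", "Foreign film Y", "Stand-up special Z"]

def build_row(slots=5, explore_rate=0.2):
    row = in_profile[: slots - 1]
    if random.random() < explore_rate:
        # Occasionally spend the last slot on a bubble-breaking pick.
        row.append(random.choice(out_profile))
    else:
        row.append(in_profile[slots - 1])
    return row

print(build_row())  # most rows stay in-profile; some end with a surprise
```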

Do you trust the system?

Recommendation engines will almost certainly get better at what they do. But how much better do they have to get before we can truly rely on them? Do the issues above need to be addressed? How would they fit into the equation?

Is it possible to design for this level of complexity, such as a person’s mood? Can external factors be addressed to create a better experience? And can recommendation engines evolve with us as our tastes mature?

Thanks for reading! We have more questions than answers, so let us know what you think in the comments below. Your claps are always appreciated, and there are more pieces on recommendation engines (and loads more) on the way.

Make sure to follow us at The Fourth Wall for the latest from our team, bringing you our thinking on interactive digital media and products.
