Why aren’t recommendation engines very effective despite today’s technology?

Alexandre Robicquet
Published in The Startup
8 min read · Nov 22, 2019

On average, Netflix users waste 21 minutes a day searching for something to watch. Across 300 million users, that adds up to more than 38 billion hours lost per year. For you, the individual, it translates to roughly 126 hours of your life lost to indecision every year.

Recommendations (in their current form) generally suck. Machine learning and its applications are supposed to make our lives easier and save us time, but the user experience is often disjointed. This isn't surprising, since the algorithms are not built for you, but to make money from you. Personalization, by contrast, is built specifically for you: a small distinction, a big difference.

Now let’s dive into the details.

Photo by Austin Chan on Unsplash

Recommendations are everywhere, but still…

Remember, advancements in technology have made it easy to overlook how seamlessly recommendation systems are incorporated into practically every device and platform. Everyone from Facebook to Target to Walgreens collects your data, analyzes it, and nudges you to buy products you may or may not need or want.

Simultaneously, the number of options consumers have access to is exponentially growing: millions of songs are available on Spotify, thousands of shows and movies are streaming online, and there’s an unlimited number of restaurants to try.

This sensory overload means people need "information filtering systems" to help their minds sort through the junk.

Finding vs. Looking: When searching through content, there is a difference between 'finding' and 'looking'. Finding is what we do on our own: the final moment when we make up our minds. Since we rarely receive one spot-on recommendation, we typically scroll through a curated list of suggestions and then decide. Looking, on the other hand, is the process of searching, and this is where the recommendation engine has its real significance. Recommendation engines are, by default, "information filtering systems" that help us look for something and then decide.

Photo by Clem Onojeghuo on Unsplash

The Paradox of Choice: Having access to an overabundance of content triggers real stress. In his book Digital Minimalism, Georgetown professor Cal Newport discusses how constant social media use often leaves users feeling overwhelmed and burnt out, which makes sense considering many of these platforms have never-ending news feeds. American psychologist Barry Schwartz first introduced this concept in his 2004 book, The Paradox of Choice, where he makes a compelling case that an overabundance of choices paralyzes consumers and creates real anxiety and frustration. Having too many options, whether in online dating, social media platforms, or content recommendations, prevents users from making decisions.

Photo by Victoriano Izquierdo on Unsplash

This sends us back to our "Netflix Homepage" situation. People are often paralyzed by choice and need more individually designed recommendations. Another factor is the fear of making a "sub-optimal choice": the FOMO (Fear Of Missing Out) on the right item, which also creates deep frustration and dissatisfaction.

This problem runs so deep that the average Netflix user worldwide spends 21 minutes a day, or 126 hours a year, looking for something to watch. Think of all that wasted time. This is why personalized recommender systems are meaningful: by filtering the abundance down to items that are relevant to one user, you eliminate the noise, reduce anxiety and frustration, and let users decide among a pertinent set of propositions.

Systems with Recommendations are often Not Personalized

But if recommendations are presented to me, aren't they already tailored to my unique set of tastes and preferences? Why am I insisting on the "personalized" aspect? To answer that, we need to look at recommendation systems from a more mathematical angle.

1 — Typically, the most widely used mathematical approach for recommender systems is "User-Item Matrix Factorization," more commonly called "Collaborative Filtering" or "Social Filtering." Collaborative filtering organizes information by using the tastes of other people. It is built on the idea that people who agreed in their evaluations of items in the past are likely to agree again in the future. For example, a person who wants to see a movie might ask friends for recommendations, and the recommendations of friends with similar interests are trusted more than recommendations from others. It can be summarized by the sentence "people with the same taste as you liked this, so you will like it too."
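To make this concrete, here is a minimal sketch of the user-item matrix factorization idea behind collaborative filtering: each user and each item gets a small latent vector, learned so that their dot product reproduces the ratings we have observed. The toy ratings, dimensions, and learning rate below are illustrative assumptions, not production values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 5, 6, 3

# Observed (user, item, rating) triples; everything else is unknown.
ratings = [(0, 1, 5.0), (0, 3, 3.0), (1, 1, 4.0),
           (2, 4, 2.0), (3, 0, 5.0), (4, 2, 1.0)]

# Each user and each item is represented by a small latent vector.
U = 0.1 * rng.standard_normal((n_users, n_factors))
V = 0.1 * rng.standard_normal((n_items, n_factors))

lr, reg = 0.05, 0.02
for _ in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                    # error on one observed rating
        U[u] += lr * (err * V[i] - reg * U[u])   # gradient step on the user vector
        V[i] += lr * (err * U[u] - reg * V[i])   # gradient step on the item vector

# Score an item the user never rated: higher means "more likely to be liked".
print(U[0] @ V[4])
```

Notice that nothing in this model is about you specifically: your vector is only meaningful relative to the users and items it was trained alongside.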

But the consequences of overusing this approach, as the majority of businesses do, run deeper than we think. Individuals are mathematically represented as groups, and groups as masses. Successive rounds of clustering progressively lead algorithms to deliver recommendations based on how similar we are to other users, not on what makes us unique.

One compelling illustration of this clustering is Netflix, where "more than 80 percent of the TV shows people watch on Netflix are discovered through the platform's recommendation system." In their case, recommendations are the result of grouping users into roughly 2,000 taste groups. In other words, across Netflix's subscriber base of approximately 250 million active profiles, each user is treated as precisely similar to about 250 million / 2,000 ≈ 125,000 other individuals.

2 — Another dimension of our uniqueness is context, and it's all relative. The most straightforward illustration is to consider yourself and see how contextual changes affect your decisions and desires. For instance, what music do you listen to in the morning, and what music do you listen to at night? Are they the same, or do the time and events of the day influence your choice?

Mood, weather, temperature, sleep, emotions: all these factors are crucial to understanding and predicting what you might want or need at a particular moment. Yet they are still never fully considered, or even acknowledged, by businesses and media platforms.

We have now become algorithmically represented based on the similarities we share with our neighbors in addition to being deprived of our context. Where is the uniqueness in this?

How can one service claim to find the best movie for me tonight if there is no way it fully understands who I am, at this moment? This question brings us full circle, back to personalization.

Demanding "personalized recommendations" becomes equivalent to demanding that our uniqueness be valued and acknowledged. Without it, these interactions produce frustrating cycles, creating problems like "filter bubbles" that hinder self-development and exploration.

The Scary Stuff: current Recommendation engines are NOT here for you.

Many recommendation engines operate in a moral-hazard zone, a textbook principal-agent problem: concerned with profits and addictiveness, they don't have the user's long-term best interest at heart.

Therefore, by giving up this fight and making peace with the mediocre recommendations platforms deliver to us daily, we implicitly give our approval. We agree that "what you [media services] are creating and presenting to me will satisfy my binging addiction for a moment." The goal of those platforms is to create and promote something that will satisfy the masses, not you specifically.

By not asking for better, by not reclaiming our identity, we implicitly agree to be seen as an element of this mass, and not as an individual.

But beyond this moral problem, here is a more straightforward reason why recommendation engines are not working: they are biased. Those algorithms are NOT solely designed to optimize content or services for you; they are designed to extract the most value out of you. For example, Netflix switched from five-star ratings to a simple thumbs up or down because it doesn't matter whether you liked or loved something; what matters is "what content is most approved by our community." Facebook and Google are advertising platforms, and their recommendations are not here for you, they're here to make money from you.

Photo by Lesly Juarez on Unsplash

Last but not least: It's much more complicated than people usually think.

In the last decade, we saw tremendous progress in machine learning and its applications, sometimes achieving better-than-human performance on specific tasks. Yet this progress is heavily skewed toward tasks like computer vision, as shown by the ten-year interest graph from Google Trends.

Developing a state-of-the-art recommendation model is much harder than developing an image classification model. On one hand, this can be explained mathematically. Image classification and similar tasks fit very well into the traditional machine learning framework: collecting data is cheap, and evaluating a model is straightforward. Recommendation systems are not classification or regression tasks; collecting data is expensive, and evaluating them is difficult. You can think of training a recommender system for a million users as training a million different classifiers, with only a dozen data points per classifier.
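A quick back-of-the-envelope calculation, with an assumed catalog size, shows just how little of the user-item matrix is ever observed compared with a dense image dataset:

```python
# Back-of-the-envelope look at how sparse recommendation data is.
# The catalog size is an assumed, illustrative number.
n_users = 1_000_000
n_items = 10_000            # assumed catalog size
interactions_per_user = 12  # "a dozen data points" per user

observed = n_users * interactions_per_user
possible = n_users * n_items
print(f"Observed fraction of the user-item matrix: {observed / possible:.3%}")
# -> 0.120%: more than 99.8% of the matrix is unknown.
```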

This means the training data sets are extremely sparse. On the other hand, most tools are built for dense data sets like images. In TensorFlow (the most popular framework for training machine learning models), the first support for training on sparse data was added only recently, years after the framework became popular. Similarly, TensorFlow still does not provide implementations of the most common objective functions used in recommendations.

Photo by Chris Ried on Unsplash

If you are a machine learning engineer working on recommendations, you have to implement them yourself, whereas everything is already built for you if you work in computer vision. Both the difficulty of the task and the scarcity of tools make it much harder for any team to implement a decent recommender system.
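As an example of what "implementing it yourself" looks like, here is a rough sketch of one widely used ranking objective, Bayesian Personalized Ranking (BPR), written by hand with basic TensorFlow ops. The embedding sizes and the toy batch of (user, positive item, negative item) ids are illustrative assumptions, not a production setup.

```python
import tensorflow as tf

# Illustrative sizes; a real system would set these from the data.
n_users, n_items, dim = 1_000, 5_000, 32
user_emb = tf.Variable(tf.random.normal([n_users, dim], stddev=0.01))
item_emb = tf.Variable(tf.random.normal([n_items, dim], stddev=0.01))

def bpr_loss(user_ids, pos_item_ids, neg_item_ids):
    """BPR: push items a user interacted with above randomly sampled ones."""
    u = tf.gather(user_emb, user_ids)        # user vectors for the batch
    p = tf.gather(item_emb, pos_item_ids)    # items the user actually chose
    n = tf.gather(item_emb, neg_item_ids)    # sampled items the user did not
    margin = tf.reduce_sum(u * p, axis=1) - tf.reduce_sum(u * n, axis=1)
    return -tf.reduce_mean(tf.math.log_sigmoid(margin))

# One gradient step on a toy batch of (user, positive, negative) ids.
opt = tf.keras.optimizers.Adam(1e-3)
with tf.GradientTape() as tape:
    loss = bpr_loss([0, 1, 2], [10, 20, 30], [99, 98, 97])
grads = tape.gradient(loss, [user_emb, item_emb])
opt.apply_gradients(zip(grads, [user_emb, item_emb]))
```

The point is not this particular objective, but that even the loss function is something the recommendations engineer has to assemble from primitives, while a vision engineer can import a ready-made one.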

It is with all these considerations at heart that Dr. Emile Contal and I co-founded Crossing Minds. We have spent our careers seeking to develop contextually relevant, optimized recommender systems that deliver the best possible personalized recommendations at the most convenient time. Through user-filtering, item-filtering, content-filtering, deep learning, and transfer learning, the recommendation systems developed by the company seek to surface the best recommendations unique to each user, not merely for "users like you". That's the company's promise.

We are currently open for sign-ups to our Early Access program for hai, our first experience focused on delivering the best recommendations for media you’ll love but haven’t discovered yet.

To check it out, you can sign up here.

Thank you for reading!

Originally answered by Alexandre Robicquet, Co-Founder of Crossing Minds, on Quora.
