Wait, how do these algorithms work?

Improving transparency in fairness-aware recommendations

Jessie J. Smith
Published in CUInfoScience
9 min read · Sep 8, 2021


Have you ever scrolled through social media and noticed advertisements that show you exactly what you’ve been looking for? Have you noticed how platforms like TikTok and Instagram often show you content that keeps you scrolling on the platform for much longer than you were anticipating? Does it sometimes seem like Spotify just ‘gets you’ in a way that Pandora’s music recommendations never did?

If you’ve ever experienced any of this, then you have interacted with a Recommender System. In fact, most people who participate in the digital world have interacted with recommender systems — whether they realize it or not.

In this article, I will be explaining some of the basics of how these artificially intelligent recommender systems work, as well as how they can be designed to treat their users better.

This is part of ongoing research that I am conducting for my PhD, where my goals are (1) to improve Artificial Intelligence (AI) and Machine Learning (ML) systems to make them more ethical; and (2) to help users better understand how these systems work. Ideally, this research will also improve the general public's understanding of AI/ML and empower everyone in their digital lives.

This article was written as a public-facing summary of an academic paper that was published in The 29th Conference on User Modeling, Adaptation and Personalization (UMAP 2021). The full paper and a presentation by the authors can be found on the proceedings website.

How Do Recommender Systems Work?

In this section I’ll explain some of the key components of recommender systems as they exist today, and how ethical concepts like “fairness” alter traditional recommendation algorithms.

Personalized algorithms know us better than we know ourselves

One of the key ingredients of a recommender system is that it collects information about you in order to recommend relevant content to you.

For example: on a platform like YouTube, the videos you have watched, the channels you have subscribed to, your likes, dislikes, and comments, and your general activity on the internet can all be used in combination by the recommendation algorithm to try to understand what types of videos you like to watch. Once the algorithm has a good idea of what you like, it can recommend videos to you that you might be interested in.

This is called personalization.

Personalization is a great way to keep users interested and engaged on a platform, because it transforms each individual feed into something catered to that user's unique, personal profile.
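To make this concrete, here is a minimal sketch of one simple flavor of personalization (content-based filtering) in Python. The video names and topic weights below are invented for illustration, and real platforms like YouTube use far more complex models, but the core idea is the same: build a profile from your past behavior and score new items against it.

```python
import numpy as np

# Hypothetical catalog: each video is described by made-up topic weights
# in the order [cooking, gaming, music].
videos = {
    "pasta_tutorial": np.array([0.9, 0.0, 0.1]),
    "speedrun_guide": np.array([0.0, 1.0, 0.0]),
    "guitar_cover":   np.array([0.1, 0.0, 0.9]),
    "baking_basics":  np.array([0.8, 0.0, 0.2]),
}

# Build a simple user profile: the average of the videos this user has watched.
watch_history = ["pasta_tutorial", "guitar_cover"]
profile = np.mean([videos[v] for v in watch_history], axis=0)

def relevance(item_vec, profile_vec):
    """Cosine similarity between an item and the user's profile."""
    denom = np.linalg.norm(item_vec) * np.linalg.norm(profile_vec)
    return float(item_vec @ profile_vec / denom) if denom else 0.0

# Score everything the user hasn't seen yet and rank by predicted relevance.
scores = {v: relevance(vec, profile) for v, vec in videos.items()
          if v not in watch_history}
for video, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{video}: {score:.2f}")
```

In this toy example, "baking_basics" ranks highest because it is most similar to what the user has already watched. That is the whole trick: the more the system knows about your past behavior, the better it can guess what will keep you watching.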

Platforms like Spotify, Netflix, Hulu, YouTube, Amazon, Facebook, Instagram, and TikTok owe a large amount of their success to personalized recommender systems that keep their unique users entertained and coming back for more.

Recommendation algorithms trying to be more “fair”

In a personalized recommender system, one of the main objectives is to accurately represent users' interests. In other words, the objective is accuracy. This means that the recommendation algorithm is trained and tested to be as accurate as it can be.

However, there are other objectives besides accuracy that we can try to optimize for in recommendation — one of which is “fairness.” I put “fairness” in quotes here because there are dozens of definitions of the term, and just as many algorithmic implementations of those definitions. This is helpful to keep in mind while exploring algorithmic fairness, because there is no ‘one size fits all’ here. What is ‘fair’ for some may be ‘unfair’ for others.

It has recently become more common for AI platforms to optimize for fairness, because different types of users have felt that they were treated unfairly by these platforms. In recommender systems alone, many scandals have left users feeling unheard or mistreated.

In 2016, many conservative Facebook users complained that FB’s news recommendations were perpetuating liberal / left-wing ideals and suppressing more conservative political views [read the story].

In 2018, many parents on YouTube complained that the recommendation algorithm was sending their children down conspiracy theory rabbit holes [read the story].

In 2020, many LGBTQ+ content creators on TikTok complained that the recommendation algorithm was suppressing their content because of their identity [read the research].

It has become obvious that recommendation algorithms that only optimize for accuracy objectives can lead to some unintended consequences, which is why new research about incorporating ethical objectives like fairness is becoming necessary.

In the research that my colleagues and I conducted, we focused on one specific definition and implementation of fairness as it relates to recommendation: provider fairness.

What is a “provider” and what is provider fairness?

Recommender systems are multistakeholder systems, which means that a variety of individuals or groups can benefit from the delivery of recommendations on a platform.

For example: on a platform like Spotify, the recommendation algorithm caters simultaneously to those who listen to the music (those who consume the recommended music) and also those who create the music (those who provide the content to be recommended). Recommendation algorithms on a platform like Spotify want to make both of these types of stakeholders as happy as possible, even in situations where it is difficult to please everyone.

As consumers of recommendations, we often forget about other stakeholders and their needs. As a Spotify listener, I usually only care about getting recommended music that matches my interests, and not much else. If I love Beyoncé and no other music, I only care about getting recommended Beyoncé’s music — even if it means that I am not supporting any new, indie artists.

However, if I were a new indie artist who had just published my first album on Spotify, I would hope that the algorithm would recommend my content to listeners in the same way that it recommends Beyoncé’s.

I would hope that Spotify’s recommendation algorithm would treat me fairly as a content provider. This is provider fairness.

What is consumer fairness?

Now consider a different platform, like LinkedIn. One of the many recommendation services LinkedIn hosts is its job recommender: employers post new job listings, and LinkedIn users get recommended the listings that are most relevant to them. If I am a female user on LinkedIn searching for a job, I would hope to get the same recommendations as an equally qualified male user searching for a job.

But that hasn’t always happened.

In 2018 it was discovered that a job recommendation platform was recommending higher-paying jobs (titles like “CEO” or “Executive”) to men and lower-paying jobs (titles like “Secretary” or “Assistant”) to women [read the story].

In this kind of system, the job recommender ideally wants to treat all users fairly and to give them content that is personalized to them, while also giving them the content they deserve to see.

I would hope that LinkedIn’s recommendation algorithm would treat me fairly as a female user. This is consumer fairness.

How is “fairness” coded into recommendation algorithms?

At this point, I have introduced both provider and consumer fairness. There are other types of fairness too, but I won’t introduce those in this article.

All of these “ethical objectives” can be implemented in recommendation algorithms in many different ways. There are dozens of algorithms that can be used to optimize for various definitions of fairness. Sometimes these implementations come into conflict with one another (e.g., when optimizing for provider fairness makes consumer fairness worse, or when optimizing for consumer fairness makes the accuracy of the entire system worse).
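As one illustration of how this can look in code (a simplified sketch, not the specific algorithm from our paper), a common family of approaches is fairness-aware re-ranking: each item's predicted relevance is blended with a fairness term, and a weight controls how much accuracy the system is willing to trade away. Every name and number below is invented.

```python
# Illustrative re-ranking: blend each item's predicted relevance (accuracy)
# with a fairness bonus for items from an under-exposed provider group.
# All scores, groups, and the weight `lam` are invented for illustration.

candidates = [
    # (item, predicted relevance for this user, provider group)
    ("track_A", 0.95, "major_label"),
    ("track_B", 0.90, "major_label"),
    ("track_C", 0.70, "indie"),
    ("track_D", 0.65, "indie"),
]

def fairness_bonus(group: str) -> float:
    """Toy fairness term: boost providers the system currently under-exposes."""
    return 1.0 if group == "indie" else 0.0

def rerank(items, lam: float):
    """Sort by relevance + lam * fairness_bonus; lam trades accuracy for fairness."""
    return sorted(items, key=lambda it: it[1] + lam * fairness_bonus(it[2]),
                  reverse=True)

print([i[0] for i in rerank(candidates, lam=0.0)])  # pure accuracy ranking
print([i[0] for i in rerank(candidates, lam=0.3)])  # indie tracks move up
```

Choosing that weight, and the fairness term itself, is exactly where the conflicts above show up: a larger weight lifts under-exposed providers, but it also pushes the most relevant items further down the consumer's list.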

There is no perfect way to implement fairness into an AI/ML algorithm. There is no perfect implementation of fairness in recommender systems that will truly treat everyone fairly. Don’t forget: “what is fair for some may be unfair for others.”

If a system claims that it is treating everyone fairly, it can mislead users into a false sense of trust. On the flip side, if a system is transparent about how it is trying to be fair, and if it explains how its algorithm works to its various stakeholders, then it gives greater agency to the users to come to their own educated conclusions about the fairness of the system.

In our research, we explored what transparency could look like in fairness-aware recommender systems. We asked real users of real systems to share their experiences, their opinions, and their needs on these kinds of platforms.

This is what we discovered.

Fairness and Transparency in Recommendation: The Users’ Perspective

Methods

In this exploratory work, we interviewed 30 people who had interacted with a recommender system online. We began by asking them to explain to us how they thought that recommender systems work. Then, we asked them how they define “fair” treatment on these kinds of platforms, and how they feel about provider fairness versus consumer fairness.

We introduced them to a platform called Kiva, a microlending platform that seeks to expand financial inclusion globally. On this platform, borrowers who need money can get their loans funded through crowdsourcing, where multiple lenders can contribute to one single loan. Kiva has recently begun to experiment with adding a recommender system to its platform, where potential lenders get recommended specific loans that are more relevant to them. On Kiva’s recommendation platform, the “recommended items” are actually loans that people are hoping to get funded. It looks like this:

Screenshot of several recommended loans on the Kiva.org website

For this recommender system, someone who needs money for a loan (a borrower) is the provider, and someone who lends money (a lender) is the consumer of recommendations.

After introducing our interview participants to Kiva, we asked them how they thought that a recommender system could treat providers more fairly. For example, should Kiva recommend loans from countries that are consistently underfunded over countries that are consistently funded? Or should Kiva recommend loans from people who need money for food and water over loans from people who need money for art projects?
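As a hedged sketch of what the first option could look like in code (purely illustrative, and not Kiva's actual algorithm or the one studied in our paper), a recommender could reserve a minimum number of slots in each recommendation list for loans from historically underfunded countries. All loans, scores, and countries below are invented.

```python
# Hypothetical provider-fairness constraint for a Kiva-like loan recommender:
# guarantee that at least `min_slots` of the top-k recommendations come from
# countries the platform has historically underfunded. All data is invented.

loans = [
    # (loan_id, predicted relevance to this lender, country, underfunded?)
    ("loan_1", 0.92, "country_A", False),
    ("loan_2", 0.88, "country_B", False),
    ("loan_3", 0.75, "country_C", True),
    ("loan_4", 0.70, "country_D", True),
    ("loan_5", 0.67, "country_A", False),
]

def recommend(items, k=3, min_slots=1):
    """Top-k by relevance, reserving at least `min_slots` (<= k) slots
    for loans from underfunded countries."""
    ranked = sorted(items, key=lambda it: it[1], reverse=True)
    picked = ranked[:k]
    underfunded_count = sum(1 for it in picked if it[3])
    # Best remaining underfunded loans, already in relevance order.
    pool = [it for it in ranked[k:] if it[3]]
    while underfunded_count < min_slots and pool:
        # Swap the lowest-relevance non-underfunded pick for the next underfunded loan.
        drop = min((it for it in picked if not it[3]), key=lambda it: it[1])
        picked.remove(drop)
        picked.append(pool.pop(0))
        underfunded_count += 1
    return sorted(picked, key=lambda it: it[1], reverse=True)

# With min_slots=2, loan_4 displaces loan_2 so that two underfunded-country
# loans appear in the top three.
print([l[0] for l in recommend(loans, k=3, min_slots=2)])
```

Whether a constraint like this is "fair" depends on whom you ask, which is exactly why we asked users how they felt about different ways of prioritizing providers.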

Finally, we asked users to describe to us how they would like to be educated about these fairness-aware recommendation algorithms, regardless of which fairness implementation was chosen.

Results

After asking participants to explain to us how they thought recommender systems worked, we received mixed responses.

While some users understood recommender systems fairly well, others thought of them as “black-box” systems that didn’t make any sense to them. Additionally, many participants shared with us that they had never thought of provider fairness, but after they had been introduced to the topic, they expressed concern for the providers of these systems.

We quickly learned that many participants were uncomfortable with the very idea of fair recommendations. Many had different definitions of what it meant for themselves or others to be treated fairly on a platform. Because of this, many interviewees desired greater transparency on these platforms.

Overall, users expressed that when a recommender system is fairness-aware, they want to be informed about how fairness constraints change their recommendation list and why this change is occurring. This is what led us to pursue explanations as a means to educate users about these things.

Explanations as Education

After we showed the Kiva loan recommendation example to our participants, we asked them how they would like recommendations on Kiva to be explained to them in a way that would make them trust the organization and the algorithm more. Here were our three biggest takeaways:

  1. First, explanations should define the system’s fairness objective for users. This was based on the majority of our participants indicating that if a recommender system uses fairness as an objective rather than just accuracy, they would like to be aware of it.
  2. Second, explanations should not nudge or manipulate users into making a decision, even if the goal is fairness. This was based on many participants’ fear that organizations can choose any fairness metric to optimize for, and that what is fair to some people might be considered unfair by others.
  3. Finally, explanations should disclose the motivation for using fairness as a system objective. This was in line with our goal of using explanations to educate users so that they understand the fairness goals of the organization hosting the recommender system, and why that organization prioritizes those goals.

Conclusion

As recommender systems become more ubiquitous in online spaces, their impact on various stakeholders can no longer be ignored. Ethical objectives like fairness are promising improvements to these algorithms, but only if they genuinely improve the experience of all users of a system — including both consumers and providers.

In this work, we sought to explore ways to increase users’ trust and agency on a platform that seeks to be more fair to its users. We plan to conduct similar work in the future with other stakeholders of these systems, including providers and employees of organizations that use recommendations in their platforms.

We hope that this work sparks conversations about ways that we can make AI/ML better. We also hope that future work continues to explore best practices for transparency in these systems, so that we can better understand how algorithms impact us as users.

Read the full peer-reviewed paper here

Nasim Sonboli, Jessie J. Smith, Florencia Cabral Berenfus, Robin Burke, and Casey Fiesler. 2021. Fairness and Transparency in Recommendation: The Users’ Perspective. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP ’21). Association for Computing Machinery, New York, NY, USA, 274–279. DOI: https://doi.org/10.1145/3450613.3456835


Jessie J. Smith
CUInfoScience

PhD Student, Researching and Creating Technical Solutions to Ethical Problems in Society. Talking about AI Ethics at radicalai.org