Recommendations without Persuasion — Multi-objective optimization to avoid creating echo chambers in the race for engagement.

Shobhit Srivastava
Published in Think Addicts
Apr 29, 2023

If you believe the Moon landing was scripted, your YouTube homepage will likely feature a video showing how NASA disguised Earth to look like the Moon. If you are a vegan, you might see a video on the health benefits of plant-based proteins. A religious person reads or watches content every day on the healing power of faith, and a fan of Argentina will mostly find videos of the times Lionel Messi outperformed Cristiano Ronaldo.

The point is not whether your beliefs are right or wrong but that you mostly find content that reinforces your already held convictions, trapping you in what has rightly been coined an “Information Cocoon.”

Image Source: https://www.youtube.com/watch?v=pE5MhZ3_vYo&ab_channel=meitongyan

AI-based recommendation systems are everywhere, from streaming platforms to feed pages. You are here because some AI-based recommendation or ranking system has found this blog relevant to you. But with these systems come potential risks, such as perpetuating biases, creating echo chambers, and limiting exposure to diverse perspectives. Read on to learn about the challenges and opportunities presented by AI-based recommendation systems and how we can improve them for all users.

Are recommendation systems all harmful? What about when you landed your dream job through a highly relevant LinkedIn suggestion, or when YouTube surfaces a motivational video that gets you out of bed and going on a bad day? In this era of Web 2.0, where anyone and everyone can create and share content, on a platform without a recommendation system you may never find what you like, and your online experience would be no different from watching TV with a seven-year-old who randomly changes channels.

Image Credit — Young Sheldon, CBS

Moreover, simply debating the pros and cons of recommendation systems may never help us understand what is actually happening, or how we might design a recommendation system that is both engaging and benign. What should be discussed is how these systems are designed and evaluated, and whether that is done with wisdom. I once read a book by Nick Bostrom which noted that if the performance of an AI system for healthcare is measured solely on how well it enhances the average health of all living humans, it may consider killing some of us to reduce the population and take the best care of a few. So, whenever we build an AI, the performance measure should not just be mathematically sound and easy to converge on, but also practical and ethical.

I agree that the above example is a bit extreme, so let us consider a real scenario happening right now: an AI-based recommendation algorithm in a video-sharing app whose performance is measured by how long a user stays glued to the screen. Soon, the algorithm will learn to exploit the confirmation bias in the human brain, our tendency to find what we already believe more appealing. The performance graphs will soar, and the app will monetize millions. While both users and the company say the app is doing great, what remains unmeasured in this quest for engagement is the impression this one-directional content leaves on users’ minds. And this is just one problem; any carelessly designed recommendation system can have various issues. Here are some of the well-studied problems.
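The contrast between an engagement-only objective and the multi-objective idea in the title can be sketched in a few lines. This is a minimal, illustrative example, not any platform's actual formula: the weights, inputs, and function names below are all hypothetical, and real systems would learn these trade-offs rather than hard-code them.

```python
# Illustrative multi-objective ranking score (all weights and names are
# hypothetical). Instead of ranking purely by predicted engagement, blend
# in a novelty/diversity term so familiar content does not always win.

def rank_score(predicted_watch_time, topic_novelty, w_engage=0.7, w_diverse=0.3):
    """Combine engagement with novelty; both inputs normalized to [0, 1]."""
    return w_engage * predicted_watch_time + w_diverse * topic_novelty

# A familiar, highly engaging video vs. a fresh topic the user rarely sees:
familiar = rank_score(predicted_watch_time=0.9, topic_novelty=0.1)  # 0.66
novel    = rank_score(predicted_watch_time=0.6, topic_novelty=0.9)  # 0.69
```

Under an engagement-only objective (`w_diverse = 0`) the familiar video always ranks first; with even a modest diversity weight, the novel video can surface, which is the whole point of optimizing more than one objective.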

Image Source: knot9.com
  • Filter Bubble: AI-based recommendation systems can create a filter bubble where users only see content that confirms their beliefs, limiting the diversity of information they are exposed to.
  • Lack of Transparency: Complex algorithms used in AI-based recommendation systems can be difficult to understand or interpret, leading to a lack of trust among users and making it challenging to identify and address biases in the system.
  • Bias and Discrimination: AI-based recommendation systems can perpetuate existing biases and discrimination in society, particularly if the data used to train the system reflects those biases.
  • Limited User Control: Users may have limited control over the content they see in AI-based recommendation systems, which can be frustrating for those who want more control over their recommendations.
  • Privacy Concerns: AI-based recommendation systems often collect and use personal data, raising concerns about privacy and data security.
  • Information Cocoon and Echo Chambers: AI-based recommendation systems can contribute to forming information cocoons and echo chambers, limiting exposure to new ideas and perspectives and potentially leading to the spread of misinformation and conspiracy theories.

If there are so many issues, how can we build recommendation systems that rise above these challenges and keep us engaged without getting us hooked?

  1. Diversify Recommendations: Recommendation engines should aim to diversify recommendations to expose users to various content and viewpoints rather than solely suggesting content that aligns with the user’s past behavior or preferences. This can help reduce the effects of confirmation bias and prevent the formation of information cocoons.
  2. Avoid Personalization Based on Sensitive Information: Recommendation engines should avoid personalizing recommendations based on sensitive personal information such as race, gender, religion, political affiliation, or other attributes that could lead to discrimination.
  3. Provide Clear and Transparent Explanations: Developers should aim to make recommendation algorithms transparent and provide clear explanations to users about how recommendations are generated, including the factors considered and how the algorithm avoids biases.
  4. Allow Users to Control Recommendations: Recommendation engines should give users control over their recommendations by allowing them to adjust their preferences and offering options to give feedback and filter results.
  5. Consider the Source of Data: Developers should be mindful of the data sources used to train the recommendation engine and ensure that the data is diverse and unbiased. They should also review the data regularly to ensure that it continues to represent diverse perspectives.
  6. Regularly Monitor and Address Biases: Developers should regularly monitor recommendation engines to identify and address biases that may arise over time. This can include conducting regular audits and user testing to identify any patterns or biases that may be present and taking steps to address them.
Source: https://www.linkedin.com/pulse/how-avoid-leadership-echo-chamber-david-regler-frsa/?trk=public_profile_article_view
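Point 1 above, diversifying recommendations, has a well-known concrete implementation: Maximal Marginal Relevance (MMR) re-ranking, which greedily picks items that are relevant but not too similar to what has already been selected. The sketch below is a toy version under simplifying assumptions (precomputed relevance scores and pairwise similarities in dictionaries); production systems would use learned embeddings instead.

```python
# Sketch of Maximal Marginal Relevance (MMR) re-ranking, one standard way
# to diversify a recommendation list. Scores and similarities are toy inputs.

def mmr_rerank(candidates, relevance, similarity, lam=0.5, k=3):
    """Greedily select k items, trading off relevance against redundancy.

    candidates: list of item ids
    relevance:  dict id -> relevance score in [0, 1]
    similarity: dict (id_a, id_b) -> similarity in [0, 1] (symmetric lookup)
    lam:        1.0 = pure relevance, 0.0 = pure diversity
    """
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def mmr(item):
            # Penalize items similar to anything already selected.
            max_sim = max(
                (similarity.get((item, s), similarity.get((s, item), 0.0))
                 for s in selected),
                default=0.0,
            )
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=mmr)
        selected.append(best)
        pool.remove(best)
    return selected
```

For example, if two highly relevant videos are near-duplicates on the same topic, pure relevance ranking would show them back to back, while MMR slots a less similar item in between, exactly the behavior the filter-bubble problem calls for.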

AI-based recommendation systems have the potential to be a powerful tool for connecting users with relevant and personalized content. However, we must be mindful of the potential risks and take steps to ensure that these systems are designed and developed with ethics and fairness in mind. By prioritizing diversity, inclusion, transparency, and accountability, we can create recommendation systems that serve all users and promote a more equitable and just society.

Tim Cook, CEO of Apple, once said, “Technology should be built on values that we all hold dear… inclusion, diversity, privacy, and security.” By prioritizing these values, we can create recommendation systems that not only provide personalized recommendations but also protect users’ privacy and security while promoting diversity and inclusion.

It is important to remember that the responsibility for improving AI-based recommendations does not solely rest on developers and designers. As users, we also play a vital role in shaping the recommendations we receive. If you find yourself surrounded by the same kind of content, it’s time to become conscious of your consumption habits and try to diversify your sources. Don’t hesitate to switch apps, avoid scrolling indefinitely, and if you come across offensive content, report it. It’s essential to understand that any AI model is only as good as the data and feedback it receives. Therefore, if we want better recommendations, we need to improve how we give feedback to AI. By doing so, we can work together to create a more inclusive, diverse, and equitable online experience for everyone.

If you truly liked this blog, some recommendation engines indeed did a good job. So thanks for reading, and have a Happy & Conscious Scrolling. :)

Shobhit Srivastava
Think Addicts

Machine Learning Engineer. Love thinking about new product ideas.