How Filter Bubbles Are Biasing Your Opinions on Social Media

Filter Bubbles and Echo Chambers: How Social Media Is Biasing Our Opinions and the Role of AI in It

Chinmay Bhalerao
Data And Beyond
6 min read · Jul 8, 2023


Source: NBC News

“Artificial intelligence has the power to reinforce our existing beliefs by creating personalized echo chambers, trapping us in filter bubbles that reinforce bias.” — Unknown

I’ve been using social media for over eight years now, and I’ve seen how it can be a powerful tool for connecting with friends and family, sharing information, and expressing myself. But I’ve also seen how it can be used to create filter bubbles and echo chambers that can bias our opinions and beliefs.

What are filter bubbles?

A filter bubble occurs when algorithms on social media platforms recommend content similar to what we have already liked or shared. As a result, we may see only one side of an issue, which makes it difficult to form an informed opinion.

For example, if I’m a liberal, I might be more likely to see news articles and social media posts that support liberal viewpoints. This can make it seem like there is no other side to the issue, which can lead to me becoming more polarized in my views.

What are echo chambers?

An echo chamber is a situation in which people are only exposed to information that confirms their existing beliefs. This can happen naturally, as we tend to gravitate toward people who share our beliefs. However, it can also be facilitated by AI algorithms that recommend content similar to what we have already seen.

For example, if I’m a member of a group on Facebook that is dedicated to a particular political ideology, I’m more likely to see posts from other members of the group that support that ideology. This can make it seem like everyone agrees with me, which can make it difficult to consider other viewpoints.

How do AI algorithms contribute to filter bubbles and echo chambers?

AI-driven personalization can inadvertently contribute to the formation of filter bubbles. As users are exposed to content that aligns with their existing beliefs and interests, they may become isolated from differing perspectives and alternative viewpoints. The algorithms prioritize showing users content that is most likely to keep them engaged, leading to a reinforcement of their existing opinions and preferences.
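To make this concrete, here is a minimal sketch of engagement-optimized ranking. The scoring function, its weights, and the post names are assumptions invented for illustration; no real platform's ranking model is this simple.

```python
# Minimal sketch of engagement-optimized feed ranking (purely illustrative:
# the scoring function and numbers are assumptions, not any real platform's).

def predicted_engagement(user_affinity, post_agreement):
    # Assume engagement is driven largely by how much a post agrees with the
    # user's existing views (both values in [0, 1]).
    return 0.3 * user_affinity + 0.7 * post_agreement

feed_candidates = [
    {"id": "post_supporting_my_view", "agreement": 0.95},
    {"id": "post_neutral", "agreement": 0.50},
    {"id": "post_challenging_my_view", "agreement": 0.10},
]

user_affinity = 0.8  # how attached this user already is to their viewpoint

ranked = sorted(feed_candidates,
                key=lambda p: predicted_engagement(user_affinity, p["agreement"]),
                reverse=True)

print([p["id"] for p in ranked])  # agreeable content floats to the top of the feed
```

Because the ranking objective is engagement rather than balance, the content that confirms the user's views keeps winning the top slots.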


Recommendation systems powered by AI are a key contributor to filter bubbles. These systems use collaborative filtering and content-based filtering to suggest content to users. By analyzing a user's past interactions and their similarity to other users, recommendation systems promote content similar to what the user has already engaged with. This creates a feedback loop in which users are continuously exposed to content that aligns with their existing worldview, reinforcing their beliefs and limiting exposure to diverse opinions (much like watching one video on YouTube and then finding your recommendations full of similar videos the next time you open it).
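Here is a minimal, hypothetical sketch of the content-based side of such a recommender. The item names, topic weights, and cosine-similarity scoring are illustrative assumptions, but they show how a user's history pulls the next recommendation toward more of the same.

```python
import numpy as np

# Toy content-based recommender: items and the user profile live in the same
# feature space (topic weights). All names and numbers are made up.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each item is described by topic weights, e.g. [politics_left, politics_right, sports]
items = {
    "article_A": np.array([0.9, 0.1, 0.0]),
    "article_B": np.array([0.8, 0.2, 0.0]),
    "article_C": np.array([0.1, 0.9, 0.0]),
    "article_D": np.array([0.0, 0.1, 0.9]),
}

# The user profile is simply the average of the items they already engaged with.
user_history = ["article_A"]
user_profile = np.mean([items[i] for i in user_history], axis=0)

# Recommend the unseen item most similar to the user's past engagement.
candidates = [i for i in items if i not in user_history]
scores = {i: cosine_similarity(user_profile, items[i]) for i in candidates}
recommendation = max(scores, key=scores.get)

print(scores)          # article_B (same viewpoint) scores far higher than C or D
print(recommendation)  # the feedback loop: history shapes what is shown next
```

Each click adds another similar item to the history, which narrows the profile further, which narrows the next round of recommendations: that is the feedback loop in miniature.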

Another way AI contributes to filter bubbles is through targeted advertising. AI algorithms analyze user data to deliver advertisements based on users' interests, demographics, and online behavior. While this level of customization makes advertising campaigns more effective, it further reinforces users' existing preferences and beliefs. Ads are tailored to align with users' interests, potentially narrowing their exposure to diverse viewpoints and alternative products or services.
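A toy sketch of interest-based ad selection makes the same point. The interest tags and the overlap-based scoring below are made-up assumptions, not any ad platform's actual targeting logic.

```python
# Hypothetical interest-based ad selection: each ad carries interest tags, and
# the ad whose tags best overlap the user's profile wins the slot.
# All tags and profiles below are invented for illustration.

user_interests = {"politics_left", "climate", "cycling", "news"}

ads = {
    "ad_progressive_news": {"politics_left", "news"},
    "ad_conservative_news": {"politics_right", "news"},
    "ad_bike_shop": {"cycling", "outdoors"},
}

def relevance(ad_tags, interests):
    # Jaccard overlap between the ad's tags and the user's interests
    return len(ad_tags & interests) / len(ad_tags | interests)

best_ad = max(ads, key=lambda name: relevance(ads[name], user_interests))
print(best_ad)  # ads keep matching existing interests, never challenging them
```

The optimization target is match quality, so the user is shown more of what they already prefer and rarely anything that cuts across it.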

Furthermore, sentiment analysis and opinion mining, enabled by AI techniques such as natural language processing, contribute to filter bubbles. AI algorithms extract sentiments and opinions from user-generated content, allowing platforms to personalize recommendations based on users' preferences. However, this reliance on user sentiment can perpetuate filter bubbles by reinforcing existing beliefs and opinions rather than challenging them.
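As a rough illustration, the sketch below uses a naive word-list sentiment score (an assumption standing in for real NLP models) to decide which topics get boosted in a feed: only the topics the user already agrees with are amplified.

```python
# Hypothetical sketch: a naive lexicon-based sentiment score on a user's
# comments decides which topics get boosted in their feed. Real platforms use
# far more sophisticated NLP; this only illustrates the reinforcement loop.

POSITIVE = {"love", "great", "agree", "right"}
NEGATIVE = {"hate", "wrong", "terrible", "disagree"}

def sentiment(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# The user's comments, grouped by the topic they were posted under (illustrative).
comments_by_topic = {
    "policy_X": ["love this, totally agree", "great analysis"],
    "policy_Y": ["this is wrong", "terrible take, disagree"],
}

# Boost topics the user reacted to positively; demote the rest.
topic_scores = {topic: sum(sentiment(c) for c in comments)
                for topic, comments in comments_by_topic.items()}
boosted = [topic for topic, score in topic_scores.items() if score > 0]

print(topic_scores)  # {'policy_X': 3, 'policy_Y': -3}
print(boosted)       # only content the user already agrees with gets amplified
```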

The probability of a user seeing a piece of content:

P(content) = f(user, content)

where f is a function that takes the user's interests and the content's characteristics as input and outputs the probability that the user will see the content.

The probability of a user becoming more polarized:

P(polarization) = g(filter_bubble_size, exposure_to_different_views)

where g is a function that takes the size of the user's filter bubble and the amount of exposure the user has to different views as input and outputs the probability that the user will become more polarized.

The total amount of polarization in a society:

P_total(polarization) = Σ_{i=1}^{N} P(polarization | user_i)

where N is the number of users in the society and P(polarization | user_i) is the probability that user i will become more polarized.

It is important to note that these equations are only models. They do not perfectly capture the reality of filter bubbles. However, they can be used to help us understand how filter bubbles work and to develop strategies for mitigating their negative effects.

Engagement-driven personalization of this kind leads to filter bubbles because users are less likely to be exposed to content that challenges their existing beliefs. That makes it harder for them to form informed opinions and can drive polarization.
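To see how the toy equations above fit together, here is a small simulation. The concrete forms chosen for f and g are assumptions made up for illustration; the text only says that such functions exist, not what they look like.

```python
import random

# A toy simulation of the equations above. The functional forms of f and g
# are illustrative assumptions, not empirically fitted models.

def p_content(user_interest, content_slant):
    # f(user, content): the closer the content's slant is to the user's
    # interest (both in [0, 1]), the more likely it is to be shown.
    return max(0.0, 1.0 - abs(user_interest - content_slant))

def p_polarization(filter_bubble_size, exposure_to_different_views):
    # g(bubble size, exposure): a bigger bubble raises the chance of
    # polarization, while exposure to different views lowers it.
    return max(0.0, min(1.0, filter_bubble_size - exposure_to_different_views))

def total_polarization(users):
    # Sum over all users of P(polarization | user_i).
    return sum(p_polarization(u["bubble"], u["exposure"]) for u in users)

random.seed(0)
users = [{"bubble": random.random(), "exposure": random.random()} for _ in range(1000)]
print(total_polarization(users))  # expected societal polarization under this toy model
```

Even in this crude form, the model makes the levers visible: shrinking bubble size or raising exposure to different views directly lowers the total.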


How can we avoid filter bubbles and echo chambers?

There are a few things that we can do to avoid filter bubbles and echo chambers:

Be aware of the filter bubble effect. Pay attention to the content that is being recommended to you and try to diversify your sources of information.

Be critical of the content you consume. Don’t just accept everything you see or hear as fact. Do your own research and try to get different perspectives on an issue.

Engage with people who have different viewpoints. This can help you to see different sides of an issue and to form a more informed opinion.

Final words

Filter bubbles and echo chambers are serious problems that can have a negative impact on our ability to form informed opinions. However, there are steps we can take to avoid them. By being aware of the filter bubble effect, being critical of the content we consume, and engaging with people who have different viewpoints, we can help to ensure that we are not being manipulated by AI algorithms.

I’m worried about the future of social media. I see how it’s being used to create filter bubbles and echo chambers, and I’m concerned about the impact this is having on our society. I think it’s important to be aware of this problem and to take steps to avoid it. We need to be critical of the content we consume and engage with people who have different viewpoints. Only then can we hope to break out of our filter bubbles and form informed opinions.

I’m not sure what the future holds for social media, but I hope that we can find a way to use it for good. I believe that social media has the potential to connect people and make the world a better place. But we need to be careful not to let it be used to manipulate us and divide us.

If you have found this article insightful

They say that generosity makes you a happier person, so give this article some claps if you liked it. If you found it insightful, follow me on LinkedIn and Medium. You can also subscribe to get notified when I publish articles. Let's create a community! Thanks for your support!

You can click here to buy me a coffee.



Signing off,

Chinmay!

