Ethical Issues in AI-Powered Social Media Apps

In the development of intelligent computing, ethical issues are central. How can developers of intelligent systems be aware of and reason about these issues during the development process? And how can users become aware of them and deal with potential breaches?

Tulio Carreira
10 min read · Sep 20, 2021

Introduction: The rise of AI and the need for ethical guidelines

According to Müller (2021), Artificial Intelligence is broadly understood as any computational system that shows intelligent behavior through actions aimed at reaching pre-defined goals. AI encompasses several modern computational techniques; it is a powerful force that is already reshaping how living beings live, interact, and inhabit their environments (Floridi et al., 2018), and its presence in society has grown significantly over time.

AI techniques are the starting point of modern medical diagnosis, movie recommendation systems, autonomous vehicles, image recognition applications, and so on, but they have also been embedded in existing technologies such as social media. Artificial Intelligence has made human life easier and more convenient to the point that we have become used to it and take it for granted (Tai, 2020). However, these revolutionary applications have both positive and negative outcomes, and Tai (2020) remarks that even though AI has greatly empowered businesses over time, it can prove harmful to the world if no guidelines are imposed.

As humans, we use ethical principles to systematize, defend, and recommend concepts of right and wrong behavior (Fieser, 2021), regulating how we interact with each other and with the environment we live in. To make sure AI systems conform to human principles, they need regulation through embedded moral rules. Discussions around Ethical AI have urged the practice of using AI systems with good intentions to empower employees and businesses while impacting customers and society fairly (Eitel-Porter, 2020).

The rise of Ethical AI is a means of tackling several ethical challenges brought about by AI systems. Recently, we have witnessed the 2016 US Presidential election being influenced by the spread of political propaganda on Facebook (Walch, 2021), the propagation of flat-Earth ideology on YouTube (Landrum et al., 2021), and the use of misinformation as a political weapon on WhatsApp during the COVID-19 pandemic (Ricard and Medeiros, 2020). Even though political propaganda, conspiracy theories, and denialism are fairly different topics, and the examples above took place on different social media applications, these situations share common ground: the spread of misinformation, which is often powered by AI.

The ethical issues aggravated by AI’s participation in social media

Kaplan (2020) reminds us that, at its very beginning, social media was seen as an opportunity to experience democracy in a more participatory manner. With the advent of AI and big data a decade later, social media increasingly went from being a facilitator of democracy to a major threat to it. Belk (2020) has listed several current ethical AI challenges; space precludes expanding on all of them, so here I focus on the issues specifically related to AI-powered social media:

  • Privacy and Surveillance: most social media applications collect personally identifiable data under the claim that it is needed for the AI to learn how to personalize users’ experience on the platform. However, the granularity of this data collection is not made explicit, and users are manipulated into leaving behind more data than ever before (Belk, 2020). As highlighted by Müller (2021), surveillance is the business model of the Internet. In addition, as our lives become ever more digital, more sensor technologies emerge to learn about our non-digital lives, which goes against “the right to be let alone” and the right to secrecy (Müller, 2021).
  • Manipulation of Behavior: the personal data collected through the aforementioned “surveillance capitalism” business model is often used against users themselves. Digital activity provides deep knowledge about personal preferences and traits; this, in turn, makes users easy targets not only for advertising but also for baseless political and scientific opinions, conspiracy theories, and misinformation. Fake news has been used to manipulate entire groups of people (Kaplan, 2020) and can be conveniently fabricated with modern technology: AI techniques can generate entire pieces of text (Wakefield, 2019), and powerful machine learning algorithms have also been used to manipulate audiovisual content. A deepfake, for instance, is a synthetic image or video in which the original content is convincingly replaced with fabricated material, typically swapping one person’s likeness for another’s, and audio can be manipulated as well to create “voice skins” or “voice clones” of public figures (Sample, 2020). Fake news and deepfake content are the fuel for harmful bots: agents that are effective in spreading heavily altered facts, amplifying messages, and manipulating public opinion thanks to their ability to generate large amounts of content in a short period of time (Walch, 2021). The manipulative side of social media harms not only the autonomy of individuals (Müller, 2021) but may also undermine critical thinking, since humans often prioritize information that is already aligned with pre-existing views and values (Landrum et al., 2021). In other words, users risk becoming trapped in information bubbles and developing highly polarized views if they are rarely exposed to differing opinions.
  • The opacity of AI systems: in the context of social media, there is a lack of community engagement with, and auditing of, algorithmic decisions. It is virtually impossible for users to know how an application came to a conclusion (Müller, 2021) because algorithms are not always transparent, in the sense that the reasoning behind recommendations is not clear to end users. On YouTube, for instance, users can opt out of a specific video recommendation, but they cannot specify why. Nor can users choose to receive, or stop receiving, recommendations on specific topics, since videos are suggested mostly based on activity history (Google, 2021).
  • Bias in decision systems: bias typically surfaces when the individual making a judgment is influenced by an irrelevant characteristic, usually a discriminatory misconception about members of a group, leading to unfair judgments, as stated by Müller (2021). Algorithms cannot tell stereotypes and bias apart on their own; these are embedded by humans, and many algorithms simply replicate and amplify human biases, affecting mostly minority groups. Recently, Facebook and Instagram have been criticized for “shadow-banning” Black users, meaning that the algorithms have shown a tendency to limit where Black people’s content appears on both platforms, without users realizing it (Anon, 2020).

How developers can counteract AI ethical issues on social media

As mentioned before, AI can be harmful to society if no ethical rules are imposed. The companies behind social media applications have been attempting to act more ethically, which is, unfortunately, still not enough; money, power, and political influence still motivate different groups and companies to keep creating computer-driven means of human manipulation and control (Burkhardt, 2017). Therefore, any AI ethics framework needs to make clear that developers will be penalized if they choose not to comply with regulations.

With regard to increasing privacy and mitigating surveillance, the General Data Protection Regulation (GDPR) is an example of a set of directives that aims to grant citizens greater control over their own data, helping businesses build more trusting relationships with their customers and the public in general (Fimin, 2018). From the start, it should be made clear what kind of data a company will collect about its users, and for what purposes.

As for the spread of misinformation and the manipulation of behavior, auditing is one way to restrain harmful content. YouTube, for instance, has banned roughly 200,000 misleading videos on COVID-19 from the platform (Criddle, 2020). A promising automated approach to fighting fake news is to use natural language processing to classify pieces of text as misleading or not, combined with techniques such as feature extraction, social context modeling, and sentiment analysis (Mesquita et al., 2020). Google and Facebook have adopted approaches to detect and flag fake news, but there is still no proactive means of eradicating misinformation altogether (Burkhardt, 2017). Even though dealing with fake news after the fact is a valuable effort, applying fact-checking techniques at the exact moment a user attempts to upload content could be a way of nipping fake news and deepfake content in the bud. Furthermore, requiring users to acknowledge the veracity of the content they are sharing could deter them from spreading inaccurate information.
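
To make the classification idea more concrete, here is a minimal sketch of such a text classifier, assuming a hypothetical labelled dataset (a CSV file named labelled_posts.csv with "text" and "label" columns) and using TF-IDF features with logistic regression via scikit-learn. It only illustrates the classification step; a real moderation pipeline would combine it with social context modeling, source reputation, and human fact-checking.

```python
# Minimal sketch of an NLP-based misinformation classifier.
# Assumes a hypothetical CSV ("labelled_posts.csv") with columns
# "text" and "label" (1 = flagged as misleading, 0 = reliable).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

posts = pd.read_csv("labelled_posts.csv")
X_train, X_test, y_train, y_test = train_test_split(
    posts["text"], posts["label"], test_size=0.2, random_state=42
)

# TF-IDF turns each post into a sparse feature vector;
# logistic regression then scores how "misleading" it looks.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))

# Such a score would be one signal among many (source reputation,
# social context, manual fact-checking), not a final verdict.
```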

The formation of social media bubbles and the consequent political polarization are harmful side effects of how social media algorithms work. YouTube, for instance, has unintentionally pushed users towards alt-right video content merely in an attempt to keep them in a cycle of video-watching, since it is in the website’s interest that users keep consuming content they find engaging (Bryant, 2020). The platform tends to reinforce individuals’ biases, since it aids selective exposure (Landrum et al., 2021). However, personalized video recommendations that challenge one’s beliefs instead of reassuring them could be a healthy means of bursting social media bubbles, as well as instigating debate and critical thinking in users. A study conducted by Melodie and Gruzd (2017) suggests fine-tuning recommendation algorithms to reduce the centrality of anti-vaccine videos. Popular videos on YouTube are usually captivating, unlike educational videos, so marketing and communication strategies could also help create more engaging truthful content and burst misinformation bubbles (Melodie and Gruzd, 2017).
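
As a rough illustration of what “recommendations that challenge one’s beliefs” could mean in practice, the toy re-ranker below trades off an item’s predicted relevance against its similarity to the user’s recent viewing history, so unfamiliar topics can surface higher in the feed. The topic vectors, relevance scores, and weighting are invented for the example and do not describe how any real platform ranks content.

```python
# Toy diversity-aware re-ranking: trade off predicted relevance
# against similarity to the user's recent viewing history, so a few
# "bubble-bursting" items surface near the top of the feed.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rerank(candidates, history_vec, relevance, diversity_weight=0.3):
    """candidates: item_id -> topic vector; relevance: item_id -> predicted
    engagement score. Higher diversity_weight pushes unfamiliar topics up."""
    scored = []
    for item_id, vec in candidates.items():
        familiarity = cosine(vec, history_vec)  # 1.0 = more of the same
        score = ((1 - diversity_weight) * relevance[item_id]
                 - diversity_weight * familiarity)
        scored.append((score, item_id))
    return [item for _, item in sorted(scored, reverse=True)]

# Hypothetical example: the user's history leans heavily toward topic 0.
history = np.array([0.9, 0.1, 0.0])
candidates = {
    "video_A": np.array([0.95, 0.05, 0.0]),  # more of the same
    "video_B": np.array([0.2, 0.7, 0.1]),    # adjacent topic
    "video_C": np.array([0.0, 0.1, 0.9]),    # unfamiliar topic
}
relevance = {"video_A": 0.9, "video_B": 0.6, "video_C": 0.5}

print(rerank(candidates, history, relevance))  # unfamiliar items move up
```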

In a similar fashion, the lack of transparency on social media could be mitigated by involving users in important decisions from the beginning. It should be possible for users to find answers to questions they might have, e.g. “Why was this video recommended to me?”, “Who viewed the posts I shared?”, “Do I still want them to see my content?”. Moreover, social media algorithms tend to reflect existing human biases, so such platforms must make genuine efforts to counterbalance them. To add fairness and avoid potentially biased decisions, it is sensible to work with more diverse training datasets and more diverse teams of developers, so that algorithms do not reproduce the very stereotypes we are trying to mitigate as we build a more inclusive society (Coded Bias, 2020).
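
One simple way for a team to start auditing for this kind of skew is to measure whether an algorithmic decision, such as demoting a post, falls disproportionately on one group of users. The snippet below computes a basic demographic-parity gap over hypothetical moderation logs; real fairness audits rely on richer metrics and real data, so this is only a sketch of the idea.

```python
# Sketch of a demographic-parity check over hypothetical moderation
# decisions: if one group's content is demoted far more often than
# another's for similar behaviour, that is a signal to investigate.
from collections import defaultdict

# Hypothetical log entries: (user_group, was_demoted)
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
]

totals = defaultdict(int)
demoted = defaultdict(int)
for group, was_demoted in decisions:
    totals[group] += 1
    demoted[group] += int(was_demoted)

rates = {g: demoted[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)  # e.g. group_a ~0.33 vs group_b ~0.67
print(f"demographic parity gap: {gap:.2f}")  # large gaps warrant an audit
```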

How end-users can protect themselves from AI ethical issues

Müller (2021) argues that we have lost ownership and control of our data. Even though there’s truth in that statement, there are several actions that end users can undertake in order to protect themselves online.

Firstly, it is important to regularly check which applications are installed on one’s devices and whether they collect personal data. If they do, and they deliver value, it is worth ensuring that only essential information is shared with them, to minimize exposure. Companies under the European GDPR are obligated to notify users whenever there has been a data breach, and users should be aware that they have the right to ask for their data to be deleted from platforms: individuals can request data erasure, and companies have about one month to delete the data (Anon, 2021).

Another way for users to counter the manipulation of behavior is to purposely consume content that is not usually served to them, incorporating new content into their social media bubbles in order to reduce bias. Users should also be skeptical of information received on social media and double-check content against reliable, reputable sources. The emotional tone of audiovisual content, the number of ads, the consistency of the argumentation, and the use (or absence) of logical fallacies are some of the indicators to consider when assessing the veracity of content online (Mesquita, 2020). Last but not least, it is important that users speak up and expose companies publicly whenever they feel affected by algorithmic bias and/or lack of transparency.

Conclusion

One of the biggest issues with AI-powered social media is the manipulation of behavior, which fuels political polarization and threatens democracy; fortunately, there are ways for both developers and end users to mitigate this harm. To burst social media filter bubbles, platforms should counterbalance biased recommendations with alternative views, while users can purposely engage with such content to expand their worldview, becoming more savvy and skeptical about whatever they are exposed to online.
