From Data to Dignity: Ethical Challenges in the Age of AI

Marco Tulio Daza

For Dublin City University — Ethics for Business and Technology Course | March 2024 | Marco Tulio Daza

Introduction

In May 1940, the Nazis invaded the Netherlands. Arthur Seyss-Inquart, appointed Reichskommissar of the occupied Netherlands, was tasked with identifying opponents of the regime (United States Holocaust Memorial Museum, n.d.), including people of Jewish descent, who were considered enemies of the German state. Thanks to IBM’s Hollerith machines and punch card technology, the Nazis were able to efficiently catalog Jewish families for deportation to concentration camps, where they were subjected to forced labor and, in many cases, exterminated. Records show that Hollerith machines were also used in at least a dozen concentration camps, including Auschwitz, Buchenwald, and Dachau. Prisoners were assigned individual Hollerith numbers and a designation based on 16 categories, such as 3 for homosexual, 8 for Jewish, and 13 for prisoner of war (Dobbs, 2001).

This technology, a precursor to modern computing, made it possible to rapidly process large volumes of personal data. IBM machines classified demographic data, facilitating the identification and segregation of the Jewish population with unprecedented efficiency. Edwin Black, in his book “IBM and the Holocaust” (2001), recounts how the Nazis, with Dutch assistance, used punch cards to create lists of Jews destined for deportation. Black highlights that, during the Nazi occupation, 73% of Dutch Jews died, in contrast to 25% in France, a country where the use of this technology was less extensive.

As the end of World War II approached, nations were devastated, and there was a global demand for peace. Delegates from 50 countries met at the United Nations Conference on International Organization in San Francisco, California, in June 1945. The result of these meetings was the signing of the Charter of the United Nations, the founding document of a new international organization intended to prevent another conflict like the one the world had just endured. In response to the atrocities committed during the war, the United Nations General Assembly adopted the Universal Declaration of Human Rights in 1948. This document establishes a wide range of fundamental rights and freedoms, ensuring life, liberty, equality before the law, and the rights to work and education, among others.

The history of the Hollerith machines during World War II is evidence of the ambivalent nature of technology: on the one hand, it is a tool capable of enhancing human progress and facilitating the achievement of our goals; on the other, it can be exploited for destructive or immoral purposes. Although the days of punched cards are long gone and Hollerith machines are now obsolete, the lessons of this episode still apply.

Technology’s capacity has reached new levels, and AI has been deeply integrated into multiple domains, significantly transforming virtually all areas of our lives.

The massive arrival of AI in the market for consumer products and services has induced a fundamental transformation in how we perceive technology. Once seen merely as a tool or instrument designed to help us achieve our objectives, AI is increasingly perceived as a subject or agent endowed with a certain autonomy and the ability to make decisions for itself. This conceptual evolution has sparked intense academic debate about the possibility of attributing moral agency to machines, challenging our traditional conceptions of responsibility, ethics, and the very nature of autonomous decision-making.

While AI has the potential to improve our lives, its implementation also poses significant ethical risks. The superhuman capacity of AI in specific fields, such as strategy games, image recognition, natural language processing, and predictive analysis, can create significant disadvantages for humans due to errors, failures, or loss of control over those systems. This fuels ethical concerns about risks that may cause harm to people.

Risks of AI include challenges in aligning its goals with human values and a tendency to anthropomorphize and over-rely on it, which can lead to the loss of skills and the spread of misinformation (Daza et al., 2023; Sison et al., 2023). It can also harm emotional health (Twenge, 2023), psychologically exploit people for the benefit of companies (Parker, 2017), compromise privacy with its large demand for data (Crawford, 2021; Zuboff, 2018), violate intellectual property (Dixit, 2023; Setty, 2023; Vincent, 2023), and perpetuate prejudice (Angwin et al., 2016). Additionally, automation can displace employees (Acemoglu et al., 2022), create precarious jobs (Cherry, 2016), and enable surveillance at work (Nguyen, 2021). Authoritarian regimes have used it to oppress and persecute dissidents (Rueckert, 2021). Social media algorithms can isolate users from differing ideas (Cinus et al., 2023; Pariser, 2011), increasing polarization (Levy, 2021) and facilitating manipulation (Wylie, 2019).

The study by Daza & Ilozumba (2022) organizes these and other ethical challenges of AI, identified through a review of the scientific literature in the field, into five clusters:

1. Foundational issues: capabilities, limitations, and autonomy
2. Privacy, surveillance, and intellectual property
3. Algorithmic bias
4. Automation and employment
5. Algorithms, media, and society

In this article, we will use these categories to explore the ethical challenges posed by AI, especially those with the potential to undermine human dignity and thereby transgress fundamental principles of human rights. We seek to offer a deep reflection on how, and to what ends, we apply technology, underscoring the premise that technology lacks moral values of its own: its impact on society and the individual depends entirely on how humanity decides to use it. This analysis aims to encourage a responsible and ethical use of AI, aligned with respect for and promotion of the dignity and inherent rights of all people.

Five Ethical Challenges of AI

Foundational issues: capabilities, limitations, and autonomy

In 2016, the artificial intelligence program AlphaGo made history by defeating the world champion of the ancient Chinese game of Go, 4 games to 1. In 2017, AlphaZero surpassed AlphaGo’s achievement by beating it 60 games to 40. The critical difference between the two programs is that AlphaGo was trained over several years on data from thousands of games played by top human players, while AlphaZero learned by playing against itself, without any human data, and did so in just 34 hours (Sokol, 2018). This highlights the impressive ability of self-learning AI to acquire skills and knowledge beyond human capacity, raising concerns about the potential for autonomous decision-making without human supervision.
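To make the self-play idea concrete, here is a minimal sketch in Python. It is emphatically not AlphaZero’s actual method (which combines deep neural networks with Monte Carlo tree search); it applies simple tabular Monte Carlo learning to the toy game of Nim, and every parameter is illustrative. The point it shares with AlphaZero is that no human examples enter the loop: the training signal comes entirely from the program’s own games.

```python
import random
from collections import defaultdict

STONES = 10          # starting pile; whoever takes the last stone loses
ACTIONS = (1, 2, 3)  # a player may remove 1-3 stones per turn

Q = defaultdict(float)      # Q[(stones_left, action)] -> learned value
ALPHA, EPSILON = 0.1, 0.2   # learning rate and exploration rate

def choose(stones, greedy=False):
    legal = [a for a in ACTIONS if a <= stones]
    if not greedy and random.random() < EPSILON:
        return random.choice(legal)          # explore
    return max(legal, key=lambda a: Q[(stones, a)])  # exploit

for episode in range(50_000):
    stones, history = STONES, []             # both "players" share one policy
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = -1.0                            # the player who moved last loses
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                     # alternate sign: zero-sum game

# With no human examples at all, the agent typically converges on the
# known optimal strategy: leave the opponent a pile of size 4k + 1.
print({s: choose(s, greedy=True) for s in range(2, STONES + 1)})
```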

This cluster explores the capabilities and limitations of AI, highlighting how its ability to surpass human performance in certain areas makes it susceptible to being used to exploit cognitive biases or to carry out manipulation attempts. It also discusses the potential negative impact of AI on people’s emotional health under certain circumstances.

To understand these issues, it is essential to be familiar with the three theoretical levels of AI: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). ANI already outperforms human intelligence in specific domains, for example, chess or the analysis of large data sets. AGI, which would match human cognitive abilities across a wide range of tasks, and ASI, which would surpass human intelligence in every respect, have yet to be developed. Experts differ widely on when AGI and ASI might be possible, with some suggesting they could be decades or even generations away.

An issue that has gained relevance among certain specialized academics is the so-called “alignment problem” of ASI. It stems from the concern that the objectives of an ASI may not coincide with human interests and values, which could lead to significant conflicts and challenges. Max Tegmark, professor of physics at the Massachusetts Institute of Technology (MIT), illustrates this concern with the extinction of the western black rhinoceros, declared extinct in 2011, asking whether some human collective, out of aversion to these animals, deliberately facilitated their disappearance. The conclusion he reaches is that the extinction was an indirect result of human intellectual superiority and the lack of alignment between human objectives and those of the affected species (Tegmark, 2018). So, if an ASI by definition possesses higher-than-human intelligence, it would be imperative to ensure (or at least hope) that its objectives are fully aligned with those of humanity.

On the other hand, although the idea of sentient robots belongs to science fiction (for now), the deployment of AI systems raises other concerns due to their impact on society.

In 2022, Google engineer Blake Lemoine drew media attention by claiming that LaMDA, one of the company’s language models, had become conscious and acquired a will of its own; he even sought to arrange legal representation for the model. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he said (Tiku, 2022). Google later fired him over his claims about the chatbot. As language models advance and their output becomes virtually indistinguishable from human conversation, there is a risk that people will find it difficult to tell whether they are interacting with a machine or a human being. This poses significant challenges to people’s ability to make conscious and informed decisions, which could affect their autonomy.

Predictive AI algorithms used by platforms such as Facebook, Netflix, and Amazon personalize the user experience by analyzing online behavior to infer preferences. This allows them to offer targeted content, from friend suggestions and movie recommendations to products the user is likely to find interesting. This customization provides notable benefits: Facebook can sift through content from billions of users to surface highly relevant results, and Amazon can recommend products closely aligned with a user’s tastes based on purchase history. However, this raises a question: at what point does a personalized recommendation become an attempt at manipulation?
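As a rough illustration of how such preference prediction works, below is a minimal user-based collaborative-filtering sketch: it scores an item a user has never seen by taking a similarity-weighted vote among other users. The ratings matrix is invented for illustration; real platforms operate on billions of sparse interactions with far more sophisticated models.

```python
import numpy as np

# rows = users, columns = items; 0 means "not rated / not seen yet"
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(user, item):
    """Score an unseen item as a similarity-weighted vote of other users."""
    sims = np.array([cosine_sim(R[user], R[other]) if other != user else 0.0
                     for other in range(len(R))])
    rated = R[:, item] > 0                 # users who actually rated this item
    if not rated.any():
        return 0.0
    return (sims[rated] @ R[rated, item]) / (np.abs(sims[rated]).sum() + 1e-9)

# User 0 never saw item 2; their closest neighbor rated it poorly, so the
# predicted score is low and item 2 would rank far down user 0's feed.
print(round(predict(0, 2), 2))   # ~2.1 on a 1-5 scale
```

The same weighted-vote machinery that ranks a movie can just as easily rank an advertisement or a political message, which is where personalization shades into the manipulation question raised above.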

ANI exceeds human capabilities in specific domains, which could allow it to exploit human cognitive biases to influence or manipulate behavior for commercial, political, or personal purposes.

Sean Parker, the founding president of Facebook (now Meta Platforms Inc.), acknowledged in an interview that the company exploited psychological vulnerabilities to keep users hooked on Facebook as long as possible in order to increase advertising sales (Parker, 2017). In 2021, former employee Frances Haugen revealed internal documents showing that the company knew its AI algorithms were deteriorating the mental health of teenagers on its platforms, yet chose to ignore this information, prioritizing profits over the well-being of its users (Paul & Milmo, 2021). In October 2023, Meta was the subject of a joint lawsuit by 41 United States attorneys general, accused of harming minors through its products (Lima & Nix, 2023). In March 2024, the United States House of Representatives passed legislation that could ban TikTok in the national territory if ByteDance, its Chinese owner, refuses to sell its stake in the platform (Shepardson, 2024). This action comes amid growing concerns that the Chinese government could access U.S. user data and use it for political manipulation.

The misuse of ANI, which by definition is superior in certain areas to human capacity, places people at a disadvantage to those who use it against them. Additionally, because this AI is trained with personal data, the possibility of using that same data against the individuals to whom it belongs threatens their individual autonomy, reducing people to mere instruments for the objectives of a third party. This act violates their dignity by ignoring their inherent value and right to be recognized and respected as rational beings with the capacity for self-determination. Ultimately, it erodes individual freedom, a fundamental pillar of human dignity.

Privacy, Surveillance, and Intellectual Property

The tension between privacy and transparency has become a dilemma for users of digital platforms. Every time we browse the Internet or use a smartphone, we generate information about our habits and preferences that is then stored and analyzed to build predictions that will likely be used to influence our behavior. While companies leverage our data to deliver personalized advertising and services, the information collected by current systems can potentially fall into the wrong hands, including hackers, unethical organizations, or authoritarian governments.

An illustrative example of the instrumentalization of AI in the exercise of social control and repression can be observed in the actions undertaken by the Chinese Communist Party (CCP) against the ethnic minority of the Uyghurs, residents of the Xinjiang region. The CCP has deployed AI systems to facilitate sophisticated mass surveillance, including facial identification and behavioral monitoring, explicitly targeting this ethnic group. Individuals identified through these systems are frequently detained and sent to re-education camps, where cases of forced labor have been reported (Bhuiyan, 2021).

However, privacy concerns don’t end there.

Implementing algorithms in human resource management often involves covert surveillance of workers, including real-time monitoring of their movements and benchmarking of their performance. This creates a climate of constant pressure among employees. One report highlighted how this practice has led Zara workers to experience anxiety when considering taking breaks, even for basic needs like going to the bathroom, for fear of negatively impacting their productivity metrics (Hirth & Rhein, 2021).

AI systems can classify people based on age, gender, race, or sexual orientation, raising ethical concerns. For example, companies that use algorithmic pricing, such as insurers or airlines, may have access to personal data that could lead to discrimination. Researchers from the University of Cambridge and Microsoft were able to predict sexual orientation from just a few Facebook likes, with an accuracy of 88% for men and 75% for women (Kosinski et al., 2013). The ease of obtaining these predictions is alarming if we consider that, in 2024, 65 countries still criminalize LGBT people, twelve of which can impose the death penalty (source: https://www.humandignitytrust.org, accessed February 19, 2024).
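Mechanically, such predictions require nothing exotic, which is part of what makes them so concerning. The sketch below mimics the shape of the Kosinski et al. (2013) setup with a synthetic user-by-like matrix and an off-the-shelf logistic regression; the data and the “hidden trait” are entirely made up for illustration (the original study used millions of real likes with dimensionality reduction).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_likes = 1_000, 50

# Synthetic user-by-like matrix: X[i, j] = 1 if user i liked page j.
X = rng.integers(0, 2, size=(n_users, n_likes)).astype(float)
# A made-up hidden trait correlated with liking a handful of pages.
signal = X[:, :5].sum(axis=1)
y = (signal + rng.normal(0, 1, n_users) > 2.5).astype(int)

# Train on 800 users, evaluate on the 200 held out.
model = LogisticRegression(max_iter=1_000).fit(X[:800], y[:800])
print(f"held-out accuracy: {model.score(X[800:], y[800:]):.2f}")
```

A few dozen binary signals per person and a textbook classifier are enough to recover a trait the user never disclosed; that is the whole privacy problem in miniature.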

Algorithmic Bias

AI has increased its presence in decision-making in various areas, but the transparency of the criteria it uses to decide remains a challenge. These decision processes, often called “black boxes,” are notable for their opacity, making it difficult to understand how machines reach their conclusions. In some cases, the information used to make these decisions is protected by trade secrets. In others, it is impossible or too costly to isolate the exact factors these algorithms consider.

There is a perception that technology, including AI systems, offers objective and accurate results; therefore, its decisions are better than those of humans. However, algorithms developed through machine learning are trained by identifying patterns in large databases, so their results reflect the human behavior contained in the data and may incorporate biases and be unfair. Worse, given their rapid proliferation, AI systems can cause serious harm by reproducing and exponentially amplifying these biases.

The gender bias produced by Google’s translation algorithm for Turkish is a clear example. Turkish uses a gender-neutral pronoun, and the algorithm resolved the ambiguity by stereotype, rendering men as go-getters and women as lazy (Tousignant, 2017). Similarly, an Amazon human resources recruiting system screened out female candidates. Although the system did not use gender among its decision parameters, it learned to penalize proxies such as participation in women’s sports teams or attendance at women’s educational institutions (Wicks et al., 2021).

Tay, Microsoft’s AI-enabled chatbot, learned from analyzing Twitter feeds and ended up posting misogynistic, racist, pro-Nazi, and anti-Semitic messages (Kriebitz & Lütge, 2020). The machine was not designed to be racist; it learned from the human behavior contained in its training data. Joy Buolamwini, a researcher at MIT, revealed shortcomings in several facial recognition programs, particularly in their performance on women and ethnic minorities. Her research showed a significant disparity in error rates: only 0.8% for light-skinned men versus 34.7% for dark-skinned women, evidencing racial and gender bias in these technologies (Buolamwini & Gebru, 2018).

Additionally, algorithmic bias can cause much more severe damage. An example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software, used in some US courts to assess defendants’ risk of recidivism. The software was trained on historical data that reflected preexisting biases against racial minorities, and its scores made African Americans almost twice as likely to be wrongly labeled as at higher risk of reoffending (Angwin et al., 2016). The software’s biased recommendations significantly affected bail decisions, sentence lengths, and criminal records for hundreds of citizens, causing unquantifiable harm.
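The kind of audit ProPublica performed on COMPAS can be expressed in a few lines: compare the false positive rate (people flagged as high risk who did not reoffend) across demographic groups. The sketch below uses synthetic records with a deliberately biased score, not the real COMPAS data, to show what such a disparity looks like when measured.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)          # two demographic groups, 0 and 1
reoffended = rng.random(n) < 0.35      # ground-truth outcome (same base rate)

# A deliberately biased score: group 1 is flagged "high risk" more often,
# even though the underlying reoffense rate is identical in both groups.
flagged = rng.random(n) < np.where(group == 1, 0.55, 0.30)

for g in (0, 1):
    did_not_reoffend = (group == g) & ~reoffended
    fpr = flagged[did_not_reoffend].mean()   # flagged despite not reoffending
    print(f"group {g}: false positive rate = {fpr:.2f}")
```

Disaggregating error rates by group, rather than reporting a single overall accuracy, is exactly the step that made the COMPAS disparity visible.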

The harm caused by algorithmic discrimination may not be deliberate. However, this does not mean that the companies, the developers of the technology, and those responsible for its implementation and use should not be held accountable.

Automation and Employment

The deployment of AI has caused a paradigm shift in the labor market. Robotic arms and automated warehouses have replaced blue-collar workers in the manufacturing sector, while in the administrative sphere, Robotic Process Automation (RPA) systems have taken over tasks previously performed by white-collar employees. Generative AI-based platforms write essays, produce code, and create art, and have found applications in consulting firms, academic institutions, and, notably, the media and entertainment industry. A notable example of this impact came when the American actors’ union SAG-AFTRA went on a prolonged strike, motivated in part by concern about performers being replaced by AI technologies (source: https://www.sagaftrastrike.org, accessed February 19, 2024).

On the other hand, AI is taking on tasks that were previously exclusive to humans, which has brought an increase in productivity. This phenomenon experienced an acceleration during the COVID-19 pandemic, driven by lockdown restrictions. It is currently unclear whether lost jobs will be replaced with new ones or how quickly this might happen. However, some economists agree that we are seeing a change in the skills demanded by the labor market, especially those related to AI (Acemoglu et al., 2022; Autor, 2019; Brynjolfsson, 2022).

The World Economic Forum estimates that approximately 40% of the average worker’s skills will need to be updated to meet the demands of future labor markets. The most in-demand skills will include critical and analytical thinking, the ability to work with people, problem-solving, and self-management skills such as resilience, stress tolerance, and flexibility (Masterson, 2023).

On the other hand, the gig economy, characterized by temporary or freelance jobs facilitated through digital platforms such as Uber, Lyft, CrowdFlower, and TaskRabbit, builds its business model on connecting people to perform microtasks. Unlike robots and RPA, this model has encouraged the creation of new jobs. However, these jobs tend to be transitory, with non-linear careers, and the model has devalued work, promoting wages below the legal minimum and serving as an excuse to avoid paying social security benefits (Cherry, 2016).

The impact of AI on the labor market is thus ambivalent. On the one hand, AI has significantly boosted productivity and helped workers enhance their skills. On the other, it has displaced employees through robots and automated systems, and the proliferation of the gig economy has led to a devaluation of work and to employment conditions that often fail to meet equitable and satisfactory remuneration standards.

Algorithms, Media, and Society

The business model of social media platforms consists of trading users’ attention as a product to advertising companies (Zuboff, 2018). Companies use AI algorithms to personalize content and ads across endless feeds. These platforms are used by governments and political parties as instruments of communication and propaganda (Valdez Zepeda et al., 2024).

However, the personalized algorithms of social networks have been accused of fostering addiction and are associated with mental health problems such as anxiety and depression (Twenge, 2023), as well as with the spread of fake news, harassment, and polarization (Levy, 2021).

In 2016, the company Cambridge Analytica used the information of as many as 87 million Facebook users to build profiles and target them with personalized advertising, with the aim of influencing their vote in the United States presidential election and the Brexit referendum in the United Kingdom (Cadwalladr, 2018).

Additionally, some people exploit social media to spread hate messages and incite outrage against specific individuals. This behavior not only increases engagement on these platforms, but also fuels a vicious cycle. This cycle benefits social media companies by generating more data and publicity around the topic, attracting even more attention. Thus, social networks become tools that can facilitate extremist activities, as evidenced in the terrorist attacks in Christchurch, New Zealand, in 2019 (Rauf, 2021).

On the other hand, the algorithms used by social media platforms have the capacity to create “filter bubbles” and “echo chambers,” phenomena that contribute significantly to social polarization. Filter bubbles form when algorithms filter the content a user sees in their feed based on their previous interactions, preferences, and online behavior, limiting their exposure to divergent points of view and reinforcing their pre-existing beliefs (Pariser, 2011). In parallel, echo chambers occur when this filtered information creates homogeneous online environments in which opinions, ideas, or beliefs are amplified by repetition within a closed community, minimizing dissent and critical debate (Cinus et al., 2023). This continuous feedback process intensifies polarization as individuals become firmer in their convictions and less willing to consider alternative perspectives, eroding public discourse and mutual understanding in society. The cycle not only segments the social fabric but also challenges the foundations of democratic deliberation, compromising the ability of individuals to debate, understand, and negotiate with those who hold different points of view.
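The feedback loop behind filter bubbles is easy to reproduce in miniature. In the sketch below, a toy recommender shows topics in proportion to past clicks while a simulated user clicks topics near their existing preference slightly more often; over time the feed concentrates on a narrow band of topics. Every parameter here is arbitrary and purely illustrative.

```python
import random

random.seed(42)
topics = list(range(10))              # ten content topics
user_pref = 3                         # the user mildly prefers topic 3
weights = {t: 1.0 for t in topics}    # the recommender starts out neutral

for _ in range(1_000):
    # Show a topic in proportion to how often it earned clicks before.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # The simulated user clicks topics near their preference more often.
    click_prob = 0.8 if abs(shown - user_pref) <= 1 else 0.3
    if random.random() < click_prob:
        weights[shown] += 0.5         # feedback: clicked topics resurface more

share = sum(weights[t] for t in (2, 3, 4)) / sum(weights.values())
# Well above the 30% a neutral feed would give these three topics:
print(f"share of the feed near the user's preference: {share:.0%}")
```

Nothing in the loop is malicious; the narrowing emerges purely from optimizing for clicks, which is what makes the phenomenon so difficult to regulate away.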

It is necessary to contextualize these phenomena, which negatively impact public discourse and the integrity of democratic life, within a landscape in which disinformation, fake news, and AI-altered audiovisual content are generated at large scale and at a marginal cost close to zero (Sison et al., 2023). Furthermore, AI models specialized in natural language processing exhibit an astonishing ability to generate compelling, fluid conversations, an ability that can be used for manipulation. An illustrative example is the case of Blake Lemoine, who came to believe that a chatbot had become sentient. In this light, it is not difficult to conceive of an army of chatbots trying to influence the purchasing decisions or political preferences of citizens. Such practices could put the democratic process and the right to participate in free elections at risk by manipulating public discourse and electoral preferences.

Conclusion

The development and implementation of AI systems present potential risks of human rights violations, either through their use by actors for immoral purposes or as an unintended consequence of their inherent limitations. Just as with IBM’s Hollerith machines during the Holocaust, AI and its applications lack agency and a will. The harm they can cause is entirely dependent on how individuals, businesses, or governments choose to use them.

This is why it is imperative to approach the development and application of AI from an ethical perspective firmly aligned with the respect and promotion of human rights. However, scandals over the inappropriate use of technological platforms, such as Cambridge Analytica and the Facebook Papers, and incidents of personal data theft, such as Equifax and Ashley Madison, underscore that relying on companies to self-regulate, and trusting in the invulnerability of their security systems, is misplaced.

That is why legislation and regulation are essential for the development and application of AI. The AI Act, recently approved by the European Parliament, is a step in the right direction. However, regulation must be accompanied by the creation of organizations in charge of monitoring and compliance, with the capacity to sanction those responsible when harm is caused; for example, a specialized and autonomous agency to supervise companies that develop and market AI products and services. In liberal democracies, such counterweights are essential to prevent abuses derived from power asymmetries between governments or companies and citizens, and to ensure that democratic orders are not altered and ethical limits are not crossed.

References

Originally published at https://tulio.daza.pro on July 11, 2024.


Written by Marco Tulio Daza

Professor at the University of Guadalajara and associate member of the DATAI at the University of Navarra, currently pursuing a Ph.D. in Economics and Business
