Chenyi Wang
Published in Digital Society
May 10, 2023


The negative impact of AI bias and discrimination on human life

Introduction:

Organizations operating at large scale often rely on automated systems to reach larger groups of people and grow their customer base. Ethics-based decision-making, however, is difficult to achieve through automated systems such as artificial intelligence (AI) and machine learning. According to Mökander et al. (2021), automated decision-making systems (ADMS) can increase efficiency and provide new solutions to complex problems, but these decisions can be coupled with ethical issues. Here, the discussion will focus on the biases and discrimination that can arise from decisions taken with the help of artificial intelligence. Three different perspectives will be taken into consideration, namely the legal, social and ethical perspectives, to understand the effectiveness of automated decision-making processes. Lastly, a short reflection demonstrates the personal development acquired through the course learning and the research for this study.

What are AI bias and discrimination?

According to Ferrer et al. (2021), digital discrimination is presently a serious problem, as many industries depend heavily on decisions made by automated systems based on AI and machine learning. Many authors and scholars have therefore done significant research into how to reduce the discrimination and ethical dilemmas created by AI-made decisions. For example, when large data sets or algorithms are analysed by computational processes, they often do not consider socio-cultural aspects and ethical complexities. Data may be accurate in some respects yet inaccurate in others, such as human feelings, societal standards, or legal processes. Likewise, perspectives related to income, education, ethnicity, and gender are difficult to capture fully in AI-driven decision-making. Hofeditz et al. (2022) state that automated decisions can produce biased outcomes that disadvantage certain groups across a diverse range of fields, such as risk assessment systems, policy-making, hiring, and recruitment.

Legal perspective:

European and United States governments have enacted much legislation to protect certain groups of people, such as multi-ethnic people, the LGBTQ community and other tribal groups. Chapter III of the EU Charter of Fundamental Rights sets out the equality and non-discrimination rights that apply across member states. In the US, anti-discrimination law is anchored in Title VII of the Civil Rights Act of 1964, alongside other statutes and legislation that courts rely on to make effective decisions. According to Wachter, Mittelstadt and Russell (2017), automated decisions in politics often create biases and discrimination, negatively impacting the general public. Governmental organizations have taken only limited steps to gain public trust in algorithmic decision-making. Recent regulation leans on the General Data Protection Regulation (GDPR) to alleviate the challenges that increased automated decision-making poses to anti-discrimination law. Regulations also provide little explanation of the logic behind automated decisions, which calls into question the accountability of governmental organizations. Moreover, even if the logic behind automated decisions is presented to the general public, the question remains whether those decisions are unbiased.

Social perspective:

Digital discrimination should also be considered from a socio-cultural angle, as machine learning and artificial intelligence often fail to understand societal values, rituals and beliefs, and deliver decisions that harm them. AI bias along socio-cultural lines is especially visible in healthcare. For example, epidemiological studies report that women show a higher tendency toward depression, which can partly reflect bias: symptoms resembling depression are observed more frequently in women because of hormonal imbalances or bodily changes and do not always indicate depression (Call and Shafer, 2018). Studies built on such data also make it harder for men to be successfully diagnosed with depression. Similarly, AI decisions can be biased when they draw on data sets from a larger population that exclude minorities from the analysis. Bias can be observed both in data sets and in the algorithms that process them. Biases in a data set covering a large population arise when certain social, historical, or institutional aspects are not measured, while algorithms in AI or machine-learning-based automated systems can introduce biases of their own through irrelevant selectivity and categorization. As Norori et al. (2021) state, technologies such as big data analytics, natural language processing, AI and robotics are used in precision medicine to make better decisions, but this exposes patients to potential biases related to gender, sex, religion, and culture.
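To make the data-set side of this concrete, the short sketch below uses purely synthetic, hypothetical data: one group is heavily under-represented in the training sample, and a single model fitted to the combined data tends to perform worse for that group. Evaluating the model per group is one simple way such bias becomes visible; the group sizes, feature shift and model choice here are illustrative assumptions, not taken from any of the cited studies.

```python
# Minimal sketch with synthetic data: under-representation of a group in the
# training set can translate into unequal accuracy across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; the label depends on the features plus a group-specific
    # "shift", so a model fitted mostly on the majority fits the minority worse.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The majority group dominates the training data (95% vs 5%).
X_maj, y_maj = make_group(1900, shift=1.0)
X_min, y_min = make_group(100, shift=-1.0)

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Per-group evaluation typically shows lower accuracy for the minority group.
for name, (Xg, yg) in {"majority": (X_maj, y_maj),
                       "minority": (X_min, y_min)}.items():
    print(f"{name} accuracy: {model.score(Xg, yg):.2f}")
```

The point of the sketch is not the specific numbers but the practice: a single aggregate accuracy figure can hide the fact that the model serves one group noticeably worse than another.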

Ethical perspective:

According to Dhirani et al. (2023), ethical dilemmas caused by AI technologies are largely observed in cybercrime, which includes the illegal interception of data, system disruptions and digital identity fraud. Moral standards are constantly being transformed by the changes that technological advances bring to social structures. Many steps and laws have been introduced to reduce the discrimination and ethical dilemmas caused by AI or automated decisions, such as Isaac Asimov's Three Laws of Robotics and the Asilomar AI Principles (Kaminka et al., 2017), which set out ethical standards meant to be incorporated into AI programs to reduce the chances of unethical decisions. However, several authors have examined both sets of principles and argued that they are of little practical use to the general public (Stokes, 2018). A more precise and detailed framework for addressing unethical decisions by intelligent technologies exists in the standards presented by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (Chatila and Havens, 2019).

How to address challenges related to AI?

AI, data analytics, and machine learning are a boon to people, as these technologies make our lives easier. They nevertheless have various disadvantages, which the people using these technologies can address through smart tactics and strategies. Where biased or inaccurate data is the concern, high-quality, representative data can reduce the possibility of discrimination (Borenstein and Howard, 2021). AI technologies also need constant updating and infrastructure development, which must be done regularly to avoid discrepancies in decision-making. Organisations adopting advanced technologies should likewise train their employees to handle these technologies effectively and avoid hindrances. On top of that, hiring recruits with AI knowledge can increase creativity and improve the handling of advanced technologies.
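One practical step in that direction is to audit an automated system's outcomes before relying on them. The sketch below is a minimal illustration rather than a method prescribed by the sources cited here: it compares positive-decision rates across groups using made-up, hypothetical decisions and group labels, and flags a large gap for human review.

```python
# Minimal sketch with hypothetical data: compare positive-decision rates per
# group and flag large gaps for human review before trusting the system.
import numpy as np

def selection_rates(decisions, groups):
    """Share of positive decisions for each group label."""
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    return {str(g): float(decisions[groups == g].mean()) for g in np.unique(groups)}

# Hypothetical shortlisting decisions (1 = shortlisted) and applicant groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())

print(rates)                                  # {'A': 0.6, 'B': 0.2}
print(f"demographic-parity gap: {gap:.2f}")   # a large gap signals review
```

A check like this does not prove a system is fair, but it gives the humans responsible for it a concrete signal that the data or the algorithm needs closer scrutiny.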

Conclusion

From the above discussion, it can be stated that artificial intelligence and automated technologies help organizations work efficiently and derive fast results, although they pose ethical, social, legal and cultural challenges. This study has emphasized how these technologies can have a negative impact and has also discussed the steps being taken to solve these problems.

Self-reflection

The course learnings helped me explore a wider range of topics and develop my knowledge base. I now understand how a course can expose me to contemporary topics and help me grasp their impact on the daily lives of ordinary people. This exposure to a wealth of knowledge had a profound impact on me. It made me appreciate the importance of completing my course effectively and compelled me to communicate effectively with my professor, through which I developed my interaction skills and shared my ideas and doubts. The learning process can be difficult at times, as we need to be versatile enough to understand the many aspects that affect our education as well as our daily lives.

On the other hand, I chose the theme of artificial intelligence and explored a particular side of this advanced technological development: AI's inability to make ethical decisions. I found many articles and scholars presenting assumptions and suggestions about how AI causes threats alongside its efficiency in decision-making. AI, machine learning, and other automated systems are rapidly spreading across industries and have already proven their efficiency in processing and analysing data sets quickly, deriving results from them, and taking decisions. Referring to various scholars' statements and views, I argued that even though AI is effective, it exposes us to biases and discrimination. It can readily undermine years of understanding and the formulation of effective laws that protect certain groups of people. We have reached a century in which we have finally understood the meaning of diversity and inclusivity, yet the wrongful use of technologies can undo years of study and break laws that were made after years of consideration.

Through deep research and the knowledge gathered from reading various articles, this course has helped me change my perspective on contemporary topics, one of them being artificial intelligence and its biases. I also improved my writing skills through access to other students' writing on similar topics. I must say that this course is carefully designed to develop a student's thinking and ability to understand various perspectives, and to increase our research skills. Through the course learnings, I developed my communication, writing and research skills, as well as the capability to be versatile and to understand different perspectives before making any decisions.

Apart from that, I have learned to manage my time, study to learn more about such contemporary themes, focus on completing my tasks, and look forward to acquiring more opportunities for learning and development.

References

Borenstein, J. and Howard, A., 2021. Emerging challenges in AI and the need for AI ethics education. AI and Ethics, 1, pp.61–65.

Call, J.B. and Shafer, K., 2018. Gendered manifestations of depression and help seeking among men. American journal of men’s health, 12(1), pp.41–51.

Chatila, R. and Havens, J.C., 2019. The IEEE global initiative on ethics of autonomous and intelligent systems. Robotics and well-being, pp.11–16.

Dhirani, L.L., Mukhtiar, N., Chowdhry, B.S. and Newe, T., 2023. Ethical Dilemmas and Privacy Issues in Emerging Technologies: A Review. Sensors, 23(3), p.1151.

Ferrer, X., van Nuenen, T., Such, J.M., Coté, M. and Criado, N., 2021. Bias and Discrimination in AI: a cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), pp.72–80.

Hofeditz, L., Clausen, S., Rieß, A., Mirbabaie, M. and Stieglitz, S., 2022. Applying XAI to an AI-based system for candidate management to mitigate bias and discrimination in hiring. Electronic Markets, pp.1–27.

Kaminka, G.A., Spokoini-Stern, R., Amir, Y., Agmon, N. and Bachelet, I., 2017. Molecular robots obeying Asimov’s three laws of robotics. Artificial life, 23(3), pp.343–350.

Mökander, J., Morley, J., Taddeo, M. and Floridi, L., 2021. Ethics-based auditing of automated decision-making systems: Nature, scope, and limitations. Science and Engineering Ethics, 27(4), p.44.

Norori, N., Hu, Q., Aellen, F.M., Faraci, F.D. and Tzovara, A., 2021. Addressing bias in big data and AI for health care: A call for open science. Patterns, 2(10), p.100347.

Stokes, C., 2018. Why the three laws of robotics do not work. International Journal of Research in Engineering and Innovation (IJREI), 2(2), pp.121–126.

Wachter, S., Mittelstadt, B. and Russell, C., 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31, p.841.
