Beyond the Binary: The Application of Virtue Ethics, Feminist Ethics, and Ubuntu Philosophy in AI Research

By Aahelie Bhattacharya, Global High School Fellow (Cupertino High School ‘25)

Introduction:

With the rise of ChatGPT and other applications of AI, AI Ethics has become an urgent topic of discussion in our modern world, and growing concern about potentially harmful effects inspired this literature review. Currently, technology is widely regarded as a detached, objective, and highly intelligent automated system, disconnected from human relationships and personalities, rather than acknowledged as something derived from human ideologies and inherently interconnected with human experience. An understanding of technology, and especially AI, as situated and contextual is crucial to creating positive change toward inclusivity and safety. This paper analyzes the aspect of AI in which computers are taught to identify and respond to human characteristics (age, race, gender, and more) without user input. Beyond the Binary refers to how classification in AI responds to gender diversity that goes far beyond the binary of strictly male and female. For example, Tonga recognizes a third gender known as fakaleiti, Southern Italy has a gender called femminiello, and two-spirit is a label some Indigenous people use to express their sexual, gender, and/or spiritual identity. Other non-binary identities include agender, genderfluid, and demigender. In this project, I will cover the shortcomings of our current machine learning systems, specifically in discriminatory identity classification, and how these shortcomings are exacerbated by disregard for the relationship between technology and society. I will then analyze three ethical approaches and their applications in the field (virtue ethics, the feminist ethics of care, and Ubuntu philosophy) to provide possible solutions that acknowledge how complex human relationships and our society’s biases shape AI.

Discriminatory Classification in AI:

Machine learning systems fall into three broad paradigms: supervised, unsupervised, and reinforcement learning. In supervised learning, the computer is given pre-categorized data and must work out the rules that connect inputs to their labels. In unsupervised learning, by contrast, the machine is given no labels and must find structure on its own by creating partitions within the dataset. Finally, in reinforcement learning the system learns from feedback provided by its environment (while playing a video game, for instance). Mutually exclusive categorization is an issue that stems specifically from supervised learning, because labels reflect fallible human judgment. When systems are trained to classify identity, this process of categorization can flatten the complexity and diversity of gender identity, and of identity in general. Many researchers have noticed and critiqued a pattern of gender-, race-, and identity-based discrimination in these systems, which rely solely on physical appearance to determine gender and other aspects of identity.
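To make the distinction concrete, here is a minimal sketch in Python contrasting the first two paradigms on synthetic data. The dataset and labels are invented for illustration; nothing here describes a production identity classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # 200 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels supplied by a human

# Supervised: the model works out rules connecting inputs to given labels.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))                 # predictions in the human's categories

# Unsupervised: no labels at all; the model invents its own partition.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])                     # cluster ids, not human categories
```

Note that the supervised model can only ever reproduce the label scheme it was handed, which is exactly where mutually exclusive categories enter.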

Our knowledge of scientific racism, a pseudoscience used to justify horrific systemic injustice, is a reminder of how dangerous classification systems for race, sexuality, and gender can be when they draw only from physical appearance. One terrible example is Germany’s Nuremberg Laws, enacted in 1935, in which racial science and eugenics were used to legalize the persecution of Jewish people and set a precedent for future antisemitism in Germany. The 2018 Gender Shades paper by Joy Buolamwini and Timnit Gebru studied three commercial gender-classification systems and found that darker-skinned women faced higher error rates than any other group, while light-skinned men were classified most accurately. Attributing this to training datasets composed of a lighter-skinned and male majority, Buolamwini and Gebru developed a new dataset that is more balanced in both gender and skin color. The paper demonstrates why varied perspectives in AI are vital. Examples of potential risks in machine learning include assuming sexuality from headshots, aiding federal judges by calculating recidivism rates, targeted surveillance, predicting ‘criminality’ from facial features, and gauging worker competence from ‘micro-expressions.’ So far, important progress has been made in AI concerning race and legal gender.
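The methodological core of Gender Shades, evaluating accuracy per intersectional subgroup rather than in aggregate, can be sketched in a few lines. The column names and toy data below are hypothetical, not the paper’s actual benchmark.

```python
import pandas as pd

# Hypothetical per-image evaluation results from a gender classifier.
results = pd.DataFrame({
    "skin_type": ["darker", "darker", "lighter", "lighter", "darker", "lighter"],
    "gender":    ["female", "male",   "female",  "male",    "female", "male"],
    "correct":   [0,        1,        1,         1,         0,        1],
})

# Aggregate accuracy looks tolerable and hides the disparity...
print("overall accuracy:", results["correct"].mean())

# ...while disaggregating by intersectional subgroup exposes it.
print(results.groupby(["skin_type", "gender"])["correct"].mean())
```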

However, research papers such as Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities address how “algorithms have moral consequences for queer communities, too. However, algorithmic fairness for queer individuals and communities remains critically underexplored.” One important consideration is that some human characteristics are fundamentally immeasurable. On classifying gender in machine learning, for example, Viviane Namaste, a professor at Concordia University, explains, “[O]ur bodies are made up of more than gender and mere performance.” In other words, while gender performance is all that a computer vision system can interpret, presentation is not a precise indicator of gender; gender is a subjective and deeply personal characteristic. Exploring this idea further in How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services, Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker evaluated commercial facial analysis (FA) and image-labeling services against a custom dataset of diverse genders built from self-labeled Instagram images. In an interview, Scheuerman, a Ph.D. candidate researching identity representations in technology, explained how varieties in gender presentation pose a barrier to machines measuring identities in a discrete or automated way.

However, this does not dismiss the real consequences that technology poses for the queer community, which include risks to privacy and safety as well as censorship. Queerness is a private aspect of identity whose nonconsensual exposure can seriously endanger mental health and safety, especially in countries where homosexuality is a punishable crime. So while queerness may be inherently indeterminable, modern systems still impose serious risks on queer communities. For example, in 2017 Stanford researchers built a model claimed to be an accurate ‘gaydar,’ able to classify people’s sexual orientation from images. The Stanford model was criticized for excluding people of color and bisexual and transgender people, and for making inaccurate assumptions about queerness; even so, this model and others like it create a potentially dangerous situation for anyone such systems are turned against. In fact, in many nations where homosexuality and gender nonconformity are punishable, authorities already manipulate social media and LGBTQ+ dating apps to locate queer individuals.

Finally, censorship, which directly leads to queer erasure, is another serious threat that rapidly improving AI systems can exacerbate. At a time when progress is prized and AI is advancing rapidly, researchers must account for the LGBTQ+ community to ensure that there is also progress in preventing the everyday dangers queer people face worldwide. So far, this section has discussed how people of color and the queer community suffer from shortcomings in machine learning. To convey the weight of this experience, one paper draws from the social philosophy of Honneth’s influential book The Struggle for Recognition to explain how an individual’s confidence and well-being are intrinsically tied to the way their emotions and rights are recognized. The paper uses this theory to describe how AI systems’ misrecognition of minority groups, and the harmful stereotypes such systems may reinforce, obstruct those groups from enjoying the same benefits of technology. Its authors argue that this theory can help stakeholders better understand the normative implications of technological bias and move forward with an approach that allows people to be recognized by technology and society in a more fulfilling way.

According to the research paper Subverting machines, fluctuating identities: Re-learning human categorization, the model of representation that our current AI systems use reveals a “lack of critical thinking about what identity actually is.” To elaborate: despite ethical guidelines asserting that humans should be regarded as individuals rather than data subjects, the shortfalls of machine learning are rooted in our society’s current biases and in the cultures of technology companies. Much AI research takes place in elite university laboratories, which historically have been affluent, White, and male-dominated spaces. According to the AI Now Report, “only 18% of authors at leading AI conferences are women, and more than 80% of AI professors are men … women comprise only 15% of the AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. Black researchers are an even smaller minority. Only 2.5% of Google’s workforce is Black, while Facebook and Microsoft are each at 4%.” In an interview, Dr. Thilo Hagendorff, AI researcher and Independent Research Group Leader at the University of Stuttgart, explained, “This is, of course, a problem because every social group has its particular values, worldviews, and perspectives on things, and values always become part of technology … be it in the AI field via the selection of training data, be it via the way code is designed, be it why are people thinking about [a] specific application but not others. So values become part of technology hence values of one particular group, mostly white men … become entrenched into technology.” In 1972, biologists Humberto Maturana and Francisco Varela defined “autopoiesis” as a network of processes capable of independently reproducing and maintaining itself, not unlike an unsupervised machine learning system.

The paper Subverting Machines, Fluctuating Identities uses this term as a metaphor: identity can be seen as an autopoietic system, theorized as feedback loops of construction and function. One co-author of the paper is Jackie Kay, a research engineer at Google DeepMind, an AI research laboratory. Asked to elaborate on this idea, Kay explained that the metaphor “reflects a lot of this thinking in queer theory about the social construction of gender and identity but it brings in also how we use a kind of self-categorization of identity to function in the world, to gain social solidarity with others, or even just like we might just like enjoy being a certain way, or have utility, or can’t avoid being a certain way.” In response to a question about whether some aspects of identity have no function or no construction, they added, “I’m always open to the idea that there are outliers … to this theory … depends on what that particular aspect of identity is … For example, … in the paper, we talk a lot about how like the problems with essentialism and the idea that like identity is like this innate thing within people right but as a counterpoint I know that there are definitely queer people who say … I was born this way and I would never be a different way …” In the following section, I will analyze how AI ethicists themselves have approached gendered discussions in ethics in the past.

These challenges of categorization in AI parallel how humans categorize gender and other aspects of identity. Asked about this idea, Kay pointed to the ways humans sort and categorize people, for census statistics, for legal reasons, and sometimes for more oppressive ends, as a parallel to the specific challenges that AI algorithms encounter. Kay described “a notion of binary identity that often may originate in certain ways of thinking in human society that [is] then reflected in machine systems where, often, a binary representation is more convenient for implementing in a system.”
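To make Kay’s point concrete, consider how a binary representation gets “implemented in a system” at the most basic level, the data schema. Both record types below are hypothetical illustrations, not code from any real system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProfileBinary:
    # A boolean field is binary by construction: every person must be
    # forced into one of exactly two categories.
    is_female: bool

@dataclass
class ProfileSelfID:
    # Free-form, optional self-identification leaves room for identities
    # the designers did not anticipate, and for declining to answer.
    gender: Optional[str] = None
```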

Beyond the Binary and the Breakthrough of Women in Ethics

In the 1980s, psychologist Carol Gilligan conducted a study demonstrating that men prioritize an “ethics of justice” (morality judged by strict principles applied equally to all, similar to the Justice Approach), while women prioritize an “ethics of care” (morality judged in light of the context and relationships surrounding it, similar to the Virtue Approach). Because AI ethics guidelines have been primarily studied and organized by men, the core values they focus on are, in turn, the ones men emphasize; almost no attention is paid to AI in contexts of nurture, welfare, or social duty. Just as the field has excluded women, it has been deprived of these wider dimensions of ethics as well. While Gilligan’s binary definition of gender and other aspects of her theory can be debated in a modern context, her contributions were nonetheless vital to the breakthrough of women in ethics. This ethical theory was introduced to me in a paper by Dr. Hagendorff. Asked how the gender spectrum alters Gilligan’s ideas, he explained, “Carol Gilligan was the first important step into bringing gender perspectives into research on justice intuitions.” In this section, we will analyze the presence and impact of other women in ethics over the past decades and the relation to science and society’s collective view of gender and identity. In the 1970s and ’80s, feminist ethicists accepted the view of gender binarism, the idea that there can only be male and female gender identities.

At the time, there were a variety of ethical discussions concerning male and female approaches to ethics. One argument held that feminism could liberate both men and women from gender expectations and from restrictions on what is or is not socially acceptable, since men and women did not hold completely different values. Other views indicated the opposite, such as Mary Daly’s Gyn/Ecology: The Metaethics of Radical Feminism, which encouraged women to embrace the idea that they are emotional and nurturing, some of the very values that male standards had labeled inferior. Philosophers such as Alison Jaggar instead proposed an androgynous outlook in which the positive values of each identity are considered together rather than kept separate. These were among the first breakthroughs in including women in academic ethical spaces, a culture that remains male-dominated and generally homogenous to this day.

The Male Focus on Calculable Solutions in AI Ethics:

As covered previously, modern technology is produced within the context of ethical frameworks of human relationships and outlooks. However, these ethical frameworks are themselves flawed and narrow, resulting in abstract guidelines for technology rather than precise ones that bridge the gap between ethics and technological execution. While there is a general appreciation of the importance of AI Ethics, especially with the recent boom of AI technology and its relevance in daily life, there is little understanding of the field itself and its shortcomings. According to Hagendorff, one such shortcoming, dissected in this section, is the idea of a mathematical approach that can “calculate” what is right. Dangerous flaws in image recognition technologies, chatbots, Uber, and more are only symptoms of a much deeper problem: the abstract claims of AI Ethics (normative claims that are hard to enforce) are based on a purely hetero-patriarchal perspective. Yet our world is not binary, and people are by nature far more complex than their physical appearance. These technologies mirror the biases and inequalities that minority groups face in society, magnified further by the severe lack of minorities in the fields of AI and AI Ethics. For example, chatbots trained on online discourse adopt misogynistic language, mirroring societal biases and humans’ misogynistic expressions; some of the gaps in how such chatbots are trained could be bridged, but are overlooked because of the lack of varied perspectives in AI companies. In response to these moral dilemmas, AI Ethics has emphasized fairness, accountability, and transparency, and specifically mathematical definitions of “fairness.” Rather than this strictly mathematical approach, emphasis should be placed on how AI tools are shaped by the surrounding environment and the people who create them. In his research paper The Ethics of AI Ethics: An Evaluation of Guidelines, Hagendorff found that among 22 major ethical guidelines, “the aspects of accountability, privacy, or fairness appear in about 80% of all guidelines and seem to provide the minimum requirements for building and using an ‘ethically sound’ AI system.” Notably, these are also the standard guidelines that companies have taken concrete steps to fulfill, through tools for bias mitigation and fairness in machine learning such as the “AI Fairness 360” toolkit, and privacy-friendly techniques such as cryptography and differential or stochastic privacy. The widespread implementation of these specific ethical aspects is due in part to their mathematical, or “calculable,” nature.
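To illustrate what “calculable” means here, the sketch below computes one of the most widely used mathematical fairness definitions, the demographic (statistical) parity difference, by hand. The predictions and group labels are invented, and toolkits such as AI Fairness 360 expose comparable metrics.

```python
import numpy as np

# Hypothetical model decisions (1 = favorable outcome) and group labels.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()   # favorable rate for group a
rate_b = y_pred[group == "b"].mean()   # favorable rate for group b

# Zero means parity; the magnitude quantifies the disparity.
print("demographic parity difference:", rate_a - rate_b)
```

The appeal is obvious: the metric is a single number a team can optimize. The limitation, as argued above, is equally obvious: nothing in the calculation asks who defined the groups, who is missing from them, or whether the outcome variable itself encodes bias.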

The Application of Virtue Ethics, Feminist Ethics, and Ubuntu Philosophy in AI Research:

Machine learning draws from the world around us, including the experiences and assumptions of its creators. Even small decisions made throughout the process of creating AI systems can have great impacts on the product. In AI Ethics, guidelines are created to improve this process and encourage every decision to be a moral one, so the normative claims in those guidelines must be applicable if they are to meaningfully change how machine learning is conducted. While specific instructions can be improved, this paper digs deeper into how the application of different ethical theories affects the problem at hand by analyzing three ethical approaches. First, we have established that the current approach is a deontological one: it asks what is strictly morally obligatory and what one’s duty should be, and generally pays little regard to the situation surrounding one’s actions. In other words, our current ethical guidelines are inspired by a male, calculable approach. Virtue ethics, by contrast, is an agent-centered theory focused on the totality of human life, including the nuance that particular situations and experiences hold; that nuance may alter what is virtuous, the moral decision according to this framework. Furthermore, virtue ethics emphasizes the education and training of agents, taking into account the individual characteristics of those who develop machine learning systems. Today, AI Ethics lacks the grounds to assign responsibility to stakeholders in the machine learning economy, which reinforces inequalities between minority groups and the majority. Implementing virtue ethics could empower each individual to take responsibility for their own actions, dare to make ethical decisions, and gain a new perspective on AI ethics. This is essential because AI Ethics is currently regarded as a roadblock to advancement rather than an opportunity for growth. As Dr. Hagendorff wrote in The Ethics of AI Ethics: An Evaluation of Guidelines, rather than viewing ethical guidelines as a harsh rule of what is right and wrong, or what is allowed and what is prohibited, we should draw from ethics as a resource that lets us think creatively and boundlessly about the possibilities of AI and cultivate a community in which people are motivated toward “broadening the scope of action, uncovering blind spots, [and] promoting autonomy and freedom.” Adopting a virtue ethics approach would balance the field’s strong focus on the technological details of AI and machine learning methods with an equal emphasis on our society and identities.

The fields of both AI and ethics (and, as should be expected, AI Ethics) are dominated by a White male perspective, yet stakeholders include women, non-White men, and non-binary and transgender people. This is where I introduce the Feminist Approach: the feminist data ethics of care framework emphasizes the ‘who’ and the ‘how’ by offering resources on how these actors can intervene and make changes. This is a prime example of how feminist ethics attends to the totality of human life, including all stakeholders.

To combat the vague, heteropatriarchal principles of AI Ethics, this approach includes prioritizing diversity, evaluating positionality, centering human personalities at every stage, and more. Feminist ethics centers not only women but an intersectional view of ethical dilemmas and the lived experiences of people with various identities. The perspective draws from the ethics of care, which, as covered previously, is the approach women tend toward in Gilligan’s comparison of men’s and women’s ethical orientations. Specifically, it attends to the complex and intersecting relationships between people and how those relationships are reflected in AI and technology. This section covers how the feminist ethics of care applies throughout the machine learning pipeline. Feminist ethics calls attention to representation biases and the harm they cause, especially in high-stakes decisions such as banking or education. As covered in the Discriminatory Classification in AI section, a major reason for this bias is the severe lack of minority representation and agency in the machine learning economy and the homogeneity of the environments in which AI is created. The approach also critically examines the positionality of actors in this sector, asking who occupies which positions while creating machine learning systems and what biases they carry. Throughout the machine learning pipeline, it calls for centering human experience and relationships: there should be widespread recognition of the relationship between technology and our society, and of how each changes and shapes the other.
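As one concrete, hypothetical example of what centering the “who” might look like early in the pipeline, the short sketch below audits how self-reported identity groups are represented in a training set before any model is fit. The column name and data are invented for illustration.

```python
import pandas as pd

# Hypothetical training data with an optional self-reported identity field.
train = pd.DataFrame({"self_reported_gender": [
    "woman", "man", "man", "non-binary", "woman", "man", "man", "man",
]})

# Share of each group; stark imbalances are a prompt for intervention
# (re-collection, re-weighting, or rethinking the task) before training.
print(train["self_reported_gender"].value_counts(normalize=True))
```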

Image: “These Women Tried to Warn Us About AI” Rolling Stone

While working, teams should critically examine the impact of their work on our society today rather than treating it as an isolated entity. Overall, there is an emphasis, even more so than in virtue ethics, on intersectionality. Feminism itself has often excluded women of color and disregarded how race, ethnicity, class, nationality, immigration status, sexuality, age, and/or disability can expose women to further discrimination; this ethical approach, however, makes intersectionality explicit. The final approach covered in this paper is Ubuntu, the Sub-Saharan African relational-humanist philosophy, which emphasizes human relationships and interdependence through the principle “I am a person through other persons.” To have Ubuntu as an individual is to care deeply about the well-being of others, treating them with friendliness, recognition, and generosity. In the context of Ubuntu, the community is the environment in which a human exists, composed of previous and future generations and of nature itself. The philosophy addresses relationships between humans and nonhumans, including God, all living beings, and even inanimate objects. In this it resembles the feminist human-centric approach, which prioritizes care for marginalized groups and individuals who are disproportionately harmed through technological means, as well as for the environment and other animals.

Additionally, equitable distribution, restoration, and reciprocity are taught: the community nurtures the individual, and the individual shares and gives back to the community. Applied to machine learning, this philosophy would stand opposed to the lack of community accountability and the centralization of power among successful technology companies. All in all, it focuses attention on human connections and the common good through bottom-up governance, the distribution of power, and data sovereignty grounded in the principle of communal good.

Conclusion:

Often, the fields that require the most attention are the ones that are not only ignored but also misperceived. The need for women in technology is often treated as a problem already solved, and in the fast-paced culture of technology companies, AI Ethics is perceived as a roadblock or simply a way to placate the public’s concerns about AI. My intention with this paper is to offer a more realistic understanding of the profound possibilities of AI Ethics and of an intersectional way of thinking. Intersectionality is the common factor critical to agents’ perspectives, to technology companies, and to ethical approaches if we are to make fundamental changes in AI Ethics and the future of machine learning. Just as stakeholders must look past the technological to the societal, we, too, can look beyond current mindsets to the recognition and respect we grant one another.

Sources:

“A Framework for Making Ethical Decisions | Science and Technology Studies.” Brown University, https://www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions.

AI Now Institute. “Discriminating Systems: Gender, Race, and Power in AI — Report.” AI Now Institute, 11 Apr. 2023, https://ainowinstitute.org/publication/discriminating-systems-gender-race-and-power-in-ai-2.

Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” MIT Media Lab, www.media.mit.edu/publications/gender-shades-intersectional-accuracy-disparities-in-commercial-gender-classification/. Accessed 12 May 2023.

Chace, Calum. Artificial Intelligence and the Two Singularities. CRC Press, 2018.

Lu, Christina, Jackie Kay, and Kevin R. McKee. “Subverting Machines, Fluctuating Identities: Re-Learning Human Categorization.” FAccT ’22, June 21–24, 2022, Seoul, South Korea. ACM, New York, NY, USA. https://doi.org/10.1145/3531146.3533161.

Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines, SpringerLink, Springer Netherlands, 28 July 2020, https://link.springer.com/article/10.1007/s11023-020-09517-8.

Norlock, Kathryn. “Feminist Ethics.” Stanford Encyclopedia of Philosophy, Stanford University, 27 May 2019, https://plato.stanford.edu/entries/feminism-ethics/.

Mhlambi, Sabelo. “On Becoming Human: An African Notion of Justice and Equity in Machine Learning.” 5 Apr. 2019, https://sabelo.mhlambi.com/ubuntu/.

Prem, Erich. “From Ethical AI Frameworks to Tools: A Review of Approaches.” AI and Ethics, SpringerLink, Springer International Publishing, 9 Feb. 2023, https://link.springer.com/article/10.1007/s43681-023-00258-9.

Scheuerman, Morgan Klaus, et al. “How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis and Image Labeling Services.” 7 Nov. 2019, www.morgan-klaus.com/pdfs/pubs/Scheuerman-CSCW2019-HowComputersSeeGender.pdf.

Tomasev, Nenad, et al. “Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities.” arXiv.org, 28 Apr. 2021, arxiv.org/abs/2102.04257.

“A Feminist Data Ethics of Care for Machine Learning: The What, Why, Who and How.” First Monday, https://firstmonday.org/ojs/index.php/fm/article/view/11833/10528.

Waelen, Rosalie, and Michał Wieczorek. “The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition.” Philosophy & Technology, SpringerLink, 3 June 2022, link.springer.com/article/10.1007/s13347-022-00548-w.
