Is AI Ethical?

Published in Fidutam · 5 min read · Dec 10, 2023

Authored By: Naa Ameley Owusu-Amo, Editorial Writer, Fidutam
Edited By: Leher Gulati, Editorial Director, Fidutam
Reviewed By: Ronit Batra, Engagement Director, Fidutam

Can we trust artificial intelligence? As this new technology gains prominence in our daily lives, modern society is reaching a point at which artificial intelligence can predict events long before they occur, and even determine one’s access to essential services. Artificial intelligence, or AI, has catalyzed innovation in various fields, from finance and healthcare to entertainment and transportation.

As artificial intelligence continues to shape the future, understanding its foundational concepts and potential applications has become increasingly paramount. We are now presented with ethical dilemmas posed by artificial intelligence, and many individuals and organizations have begun to address these dilemmas with their own ideas.

The rapid evolution of artificial intelligence has raised ethical concerns across many sectors. Even Hollywood has weighed in, releasing works such as I, Robot, M3GAN, and The Creator that explore the moral dimensions of AI and imagine hypothetical futures in which its limits know no bounds.

The White House has also addressed concerns about artificial intelligence, ranging from the safety of patient information in healthcare to the tracking of users’ data consumption in everyday applications, while noting the profound impacts AI has had on society. The agriculture industry, for example, has benefited from AI algorithms that predict storms, helping farmers better protect their crops and fields. The healthcare industry has likewise benefited from computer algorithms’ ability to identify diseases in patients, uncovering cases of misdiagnosis and helping professionals perform their duties to the best of their abilities. Yet despite the many ways in which AI eases the work of modern industries, the critical question remains: is artificial intelligence ethical? Can it remain objective, free of bias, and be used only with consent?

The Five Ethical Principles of AI

To explore the ethical dimensions of AI further, it is essential to understand the five ethical principles of artificial intelligence presented by the United States Department of Defense, which require AI to remain responsible, equitable, traceable, reliable, and governable. By embedding moral clauses into regulatory practices governing how artificial intelligence interacts with our world, these guidelines serve as a safeguard so that AI is used for societal improvement rather than societal harm.

Ethical Concerns of AI

The United Nations Educational, Scientific and Cultural Organization (UNESCO) sheds light on another critical issue: bias in artificial intelligence. When AI helps make decisions that affect many lives, such as determining whether someone qualifies for a loan or screening resumes through an applicant tracking system (ATS), bias can sway outcomes in significant ways. The best-qualified candidate may not be offered a job simply because their resume was not formatted the way the AI was trained to expect a “perfect” candidate’s resume to look. Similarly, a hopeful homebuyer may be denied a loan because of a minor discrepancy that, in a face-to-face conversation with a bank teller, could have been explained away as an error.
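To make the resume-screening failure mode concrete, here is a minimal sketch of a naive keyword-matching screener. The keyword list, scoring rule, and resumes are invented for illustration; real ATS software is far more sophisticated, but the underlying risk is the same: formatting, not qualification, decides the outcome.

```python
# Hypothetical sketch: a naive exact-match keyword screener.
# All keywords, resumes, and the threshold are illustrative assumptions.

REQUIRED_KEYWORDS = {"python", "sql", "etl"}

def screen(resume_text, threshold=3):
    """Pass a resume only if every required keyword appears as its own token."""
    tokens = set(resume_text.lower().replace(",", " ").split())
    hits = len(REQUIRED_KEYWORDS & tokens)
    return hits >= threshold

# Two resumes describing the same skills, formatted differently.
resume_a = "Skills: Python, SQL, ETL pipelines"
resume_b = "Built ETL-style data pipelines in Python/SQL"

print(screen(resume_a))  # keywords appear as separate tokens, so it passes
print(screen(resume_b))  # "ETL-style" and "Python/SQL" hide the keywords, so it fails
```

The second resume is rejected not because the candidate lacks the skills, but because compound tokens defeat exact matching, which is precisely the kind of arbitrary disadvantage the paragraph above describes.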

These are just a few of the cases in which AI intervention can produce undesirable and unfair outcomes for humans, raising the question of whether AI is truly ethical for human society.

Notable Initiatives and Research

MIT researcher Joy Buolamwini’s work on facial recognition, titled “Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers,” delves into the biases of facial recognition software in classifying individuals by skin color and gender. Her findings show that the error rate for determining the gender of lighter-skinned men is 0.8%; for women with darker complexions, however, the error rate climbs to an astonishing 20% in one case and 34% in another. These results raise concerns about the neutrality of algorithms that learn patterns from extensive data sets. Buolamwini notes that the same data-centric techniques used to classify gender also power employment tracking systems, smartphone unlocking, and criminal identification, situations in which accuracy is vital. When current software cannot demonstrate neutrality, AI algorithms that erroneously identify or accuse individuals pose a tremendous risk.
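The kind of disaggregated audit described above can be sketched in a few lines: compute the error rate separately for each demographic subgroup rather than one aggregate score. The records below are toy data invented for demonstration (not the Gender Shades dataset); their disparity pattern simply mirrors the rates reported in the study.

```python
# Illustrative sketch (hypothetical data): auditing a classifier's error
# rate per demographic subgroup, in the spirit of the Gender Shades audit.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

def audit_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping each group to its error rate."""
    by_group = {}
    for group, pred, true in records:
        preds, trues = by_group.setdefault(group, ([], []))
        preds.append(pred)
        trues.append(true)
    return {g: error_rate(p, y) for g, (p, y) in by_group.items()}

# Toy data: 1 of 100 errors for one group, 34 of 100 for the other.
records = (
    [("lighter-skinned men", "M", "M")] * 99
    + [("lighter-skinned men", "F", "M")] * 1
    + [("darker-skinned women", "F", "F")] * 66
    + [("darker-skinned women", "M", "F")] * 34
)
rates = audit_by_group(records)
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.1%} error rate")
```

An aggregate accuracy number over all 200 records would hide the disparity entirely; splitting by subgroup is what surfaces it.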

Building Trust in AI

With this power comes a level of trust that must be earned. The IBM Legal Affairs team in the European Union, writing on AI ethics and trust, emphasizes the importance of instilling accountability into artificial intelligence: transparency, and the ability of AI systems to explain themselves to humans, are essential to building that trust.

Conclusion

Artificial intelligence has brought revolutionary progress to society, accelerating technological advancement and allowing humanity to explore and improve many aspects of life. With these advancements, however, we must ensure a level of objectivity and neutrality in AI. Though AI has benefited modern society in a multitude of ways, the field still has substantial hurdles to clear before standards about its morality and values are set.

As of today, AI is not perfect. But using these new technologies does not necessarily mean causing harm: technology, imperfect as it is, can benefit society in unprecedented ways. Fidutam, an organization built on the goal of total financial inclusion for all populations, uses technology to ensure that underserved populations are not left behind by the ongoing technological advancements worldwide.

Fidutam has developed a platform through which unbanked populations, termed ALICE populations (Asset Limited, Income Constrained, Employed), can access a microlending system, a first step toward real banking that enables underserved communities to share in its benefits.

On a final note, the rise of artificial intelligence has sparked both curiosity and doubt concerning its ethics. It is best to use this new technology to its fullest capacity while still exercising caution, preserving safety and morality as we do so.

To learn more about Fidutam, click here or visit our website!
