Ethical And Moral Dilemmas of AI

Joann Elizabeth Panicker
Dec 13, 2020


[Illustration by Harriet Lee-Merrion]

Coffee, umbrellas, ice cubes, the telephone, the printing press, planes, computers, bicycles, and vaccines! What do you think they have in common?

Well, besides all of them being pretty major inventions, these are some creations that were considered superfluous, dangerous, or impractical in their initial years. And it’s no surprise: human beings have always been skeptical about major changes in their way of living! We thrive on familiarity, and anything pushing the boundaries we’ve carefully cultivated over the years appears to pose a threat to our society. But if we know that, why engage in these conversations at all?

And moreover,

Why care about Ethics?

[Illustration by Koma Zhang for Quanta Magazine]

Most of the obvious laws that govern our world, especially in relation to our values, attitudes, practices, and political communities, essentially stem from our ambiguous (and often subjective) sense of morality, developed through years of evolution and of living as social animals. We know that lying is bad, stealing is wrong, and murder is unacceptable!

But when it comes to more complicated issues, like the ethics of genetic modification, or our stance on privacy in the new digital age, our idea of what is conclusively acceptable is constantly questioned — because as new technologies emerge, our world’s “normal” changes. With more parameters introduced into our equation, it stands to reason that our frame of reference must adapt as well.

Hence, there are innumerable arguments questioning the ethics of the emergence of AI. The sections of this article, in particular, will address the moral capacity of AI systems, the moral issues that arise from the misuse of such technology, and the moral status of the machines themselves.

But in all this, my favorite part? Solving many of them would require us to question our biases and pin our moral principles down to a science — we get to explore and understand the human race better than ever! Further, working in technology means that we get the opportunity to make our innovations more inclusive and fair — which is the way to go, because indisputably, AI is our future. It is in every field, highly integrated into our lifestyle, and heavily influencing nearly every decision we make. Sounds like a lot of pressure, doesn’t it?

So let’s get to it! What are some of the key dilemmas that present themselves in this vast equivocal sphere of morality?

AI in Cognitive Decision Making

Nobel laureate Daniel Kahneman, at the Toronto conference on the Economics of AI, argued that the decision-making process of humans is “noisy” and therefore should be replaced by algorithms “whenever possible” (cited in Pethokoukis, 2017).

In a world that practically yearns for automation, Artificial Intelligence will soon be in charge of making most major decisions in the world — financial, judicial, diplomatic, educational, and even military. So the question arises:

Can AI be taught to be “moral”?

Can we trust an AI system’s Moral Compass?

The basis of developing AI involves breaking down data, problem statements, and decisions into their bare components, and evaluating them based on previous knowledge and reinforcement. However, moral questions are often not so binary, and are highly situation-dependent — there are entire fields of study in philosophy dedicated to morality and ethics alone. And to ensure the best interests of the people, there are questions that need to be answered before we hand over the baton of important cognitive decision making to our man-made brethren.

For example, how do we develop a universal language for morality? (The sketch after these questions hints at why this is so hard.)
How do we teach the empathy, instinct and wisdom in decision-making to AI?
How can we trust that the humans training A.I. to be ethical are themselves ethical?
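
Take that first question. Here’s a deliberately naive sketch, in Python, of what reducing morality “to a science” might look like; every weight in it is an invented assumption, which is precisely the problem:

```python
# A deliberately naive "moral calculus": score an action by weighted factors.
# Every weight here is an arbitrary assumption -- there is no agreed-upon
# exchange rate between harm, benefit, and fairness.

MORAL_WEIGHTS = {
    "harm_caused": -10.0,   # how bad is one unit of harm?
    "people_helped": 2.0,   # how good is helping one person?
    "fairness": 5.0,        # how do you even put fairness on a number line?
}

def moral_score(action: dict) -> float:
    """Collapse a messy, situation-dependent question into one number."""
    return sum(MORAL_WEIGHTS[k] * action.get(k, 0.0) for k in MORAL_WEIGHTS)

# Two actions that reasonable people would rank differently:
lie_to_protect = {"harm_caused": 1, "people_helped": 3, "fairness": -1}
harsh_truth    = {"harm_caused": 2, "people_helped": 1, "fairness": 1}

print(moral_score(lie_to_protect))  # -9.0
print(moral_score(harsh_truth))     # -13.0
```

Change the weights and the ranking flips, and there is no principled way to pick them. That’s the “universal language for morality” problem in miniature.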

The last of those questions especially, highlighting the human constraints of subjectivity and prejudice, brings us to our next problem:

Bias

You’d think machines wouldn’t have the human tendencies for bias, yes? Impartial decision making ought to be a point for Team AI? Well, no.

[Image credits: Dogtown Media]

The idea that ML algorithms are inherently neutral is a commonly held one, but we must remember — AI primarily functions on data. And as it turns out, our data is quite biased.

Amazon most famously ran into a hiring-bias issue after training an A.I.-powered algorithm to surface strong candidates based on historical data. Because previous candidates had been chosen with human bias (favoring male candidates), the algorithm learned to favor men as well.
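
Here’s a minimal sketch of how that happens, using entirely synthetic data (the feature names, numbers, and model choice are all invented for illustration): a model trained on historically biased decisions simply learns the bias as if it were signal.

```python
# Synthetic illustration: if historical hiring decisions favored one group,
# a model trained on them learns that preference as if it were "signal".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female (synthetic)
skill = rng.normal(0, 1, n)      # skill distributed identically in both groups

# Historical "hired" label: skill mattered, but so did a bias toward men.
hired = (skill + 1.5 * (gender == 0) + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Identical skill, different gender -> different predicted odds of hiring.
print(model.predict_proba([[0, 0.0]])[0, 1])  # male, average skill (higher)
print(model.predict_proba([[1, 0.0]])[0, 1])  # female, same skill (lower)
```

Note that nobody told the model to discriminate; the preference was sitting in the labels all along.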

Further, facial recognition algorithms made by Microsoft, IBM, and Megvii were all found to be biased when detecting people’s gender across races, identifying the gender of white men more accurately than that of darker-skinned men.
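
Auditing for this kind of disparity is, at least mechanically, simple. A sketch of a per-group accuracy check (the arrays below are made-up toy values standing in for a real classifier’s output):

```python
# A simple fairness audit: compare a classifier's accuracy per group.
import numpy as np

def accuracy_by_group(y_true, y_pred, group):
    """Return {group_label: accuracy} so disparities are visible at a glance."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    return {str(g): float((y_pred[group == g] == y_true[group == g]).mean())
            for g in np.unique(group)}

# Toy numbers shaped like the disparity described above:
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 1, 0]
group  = ["lighter"] * 4 + ["darker"] * 4
print(accuracy_by_group(y_true, y_pred, group))
# {'darker': 0.5, 'lighter': 1.0}
```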

[Illustration by Jenice Kim]

Addressing this bias forces us to confront underlying structural inequalities. Whether explicit or implicit, biases are often a symptom of a lack of diversity among the people who build the technology. Women and minority groups remain underrepresented in the technology field, which makes it that much harder to represent all of humanity and catch these biases.

And while unintentional prejudice can be fixed, how do we enforce fairness in an often unjust society? Fun fact: in June this year, ICE (US Immigration and Customs Enforcement) modified its own risk assessment algorithm so that it could only produce one result: the system recommended “detain” for 100% of immigrants in custody.
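
The scary part is how little “modification” that takes. A purely hypothetical sketch (not ICE’s actual system) of how one parameter can gut a risk model while it still looks like an objective algorithm from the outside:

```python
# Hypothetical sketch: a risk model is only as honest as its decision rule.
def recommend(risk_score: float, release_threshold: float = 0.3) -> str:
    return "release" if risk_score < release_threshold else "detain"

# "Modify" a single parameter so that release becomes impossible, while the
# scoring machinery still appears to be doing objective work:
rigged = lambda risk_score: recommend(risk_score, release_threshold=0.0)
print(rigged(0.01), rigged(0.99))  # detain detain
```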

Really makes you wonder exactly how much the power of AI can be misused at scale, no? And you’d think the solution to this is transparency in the decision-making process — if we are going to use machine learning algorithms to make any sort of worthwhile decision, we should get to demand that they be able to explain themselves. Great idea, except revealing the algorithm would make it easy to game, defeating its entire purpose!

And while it’s now evident how our biases pass on to our machines, what power would these machines have over influencing our decisions?

How do these machines affect our behavior?

[Illustration by Corinne Reid for Quanta Magazine]

“In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01).

The internet, at any given time, holds an insane amount of our data. Check out the “Data and Personalization” tab of your Google account, for instance, and you’ll see that it has enough information about you to successfully and accurately guess your profession, age, financial status, relationship status, the kind of house you live in(?), hobbies, and preferences on just about anything. This isn’t really surprising: we spend a huge amount of our lives on the internet. But that much data? Powerful! Especially from a marketing standpoint. And what is the world, if it isn’t all about the marketing?

We know how much we tend to live and breathe on social media, and that dependency isn’t random: nearly every element we scroll past is designed to capture attention or trigger the reward centers in the human brain. It’s all often optimized with A/B testing, a rudimentary form of algorithmic optimization of content for our attention. And once these platforms learn to keep our attention and assess our data, they use it to break down and essentially map out the fundamentals of what makes something “sell” — from the colors that stand out to us, to the words that make for good clickbait, to the features that make media addictive, to the key demographics something is popular with.
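
For the curious, A/B testing at its simplest is just this: serve variants, count clicks, keep the winner. A toy simulation (the variant names and click probabilities are invented):

```python
# Toy A/B test of two headlines with made-up "true" click probabilities:
# show each variant at random, measure click-through, keep the stickier one.
import random

TRUE_CLICK_RATE = {"calm_headline": 0.05, "outrage_headline": 0.11}  # invented

def run_ab_test(trials: int = 10_000) -> dict:
    clicks = {v: 0 for v in TRUE_CLICK_RATE}
    shows = {v: 0 for v in TRUE_CLICK_RATE}
    for _ in range(trials):
        variant = random.choice(list(TRUE_CLICK_RATE))
        shows[variant] += 1
        clicks[variant] += random.random() < TRUE_CLICK_RATE[variant]
    return {v: clicks[v] / shows[v] for v in TRUE_CLICK_RATE}

random.seed(1)
rates = run_ab_test()
print(rates)                      # measured click-through rate per variant
print(max(rates, key=rates.get))  # 'outrage_headline' wins, so it ships
```

Scale that loop up to thousands of variants and millions of users, and “what captures attention” becomes a thoroughly mapped-out quantity.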

So what? Social media uses AI to acquire and analyze our data to provide more personalized content — what’s so wrong?

That’s where the lines get blurry. By using data that we hadn’t (up until recently) knowingly consented to providing, our privacy is being violated; and using that data to find out what makes us tick, then monetizing it, is definitely more than just morally ambiguous — it’s a form of deception, manipulation, and exploitation of human weaknesses. And in the wrong hands, it’s also extremely dangerous.

For instance, AI allegedly plays a huge role in elections, influencing voters all around the world. Targeting specific content, without confirming its validity, at the specific demographics already shown (through intelligent data collection and analytics) to engage with similar material can create a powerful polarizing network, one that reinforces prejudice and propagates propaganda. Democracy becomes less about public opinion and more about who has the better resources for manipulation.
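
Mechanically, such targeting can be as simple as similarity matching. A sketch with invented engagement vectors (real systems are far more elaborate, but the principle holds):

```python
# Sketch of similarity-based targeting with invented engagement vectors:
# push content to users whose past engagement resembles known engagers'.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One vector per user: engagement with topics (shares, likes, dwell time...).
users = {
    "alice": np.array([9.0, 1.0, 0.0]),  # heavy on topic 0
    "bob":   np.array([8.0, 2.0, 1.0]),  # profile similar to alice's
    "carol": np.array([0.0, 1.0, 9.0]),  # very different interests
}
known_engager = users["alice"]  # alice engaged with the (unverified) story

# Target everyone whose profile is close enough to a known engager's:
targets = [name for name, vec in users.items()
           if cosine(vec, known_engager) > 0.9]
print(targets)  # ['alice', 'bob'] -- the echo chamber picks its own audience
```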

And not only does AI help in targeting and spreading this content, it also greatly assists in creating it, from misinformation to deepfakes. AI systems are getting really good at creating fake images, videos, conversations, and all manner of content.

[Image: “Some Super Scary Statistics”]

And with the internet being a primary source of information and news for most of the world, insufficient regulation here could lead to things getting way out of hand. Think of the worst-case scenario, and it’d be possible!

But beyond the conversations about cognitive skills and misuse, a discussion of the ethics of AI would be incomplete without asking ourselves how we should treat these systems. Considering the possible future emergence of rational, sentient robots and AI systems, at what point would we consider them moral beings?

Should We Treat Robots Like People?

[Image credits: Shutterstock]

Will these machines be considered morally relevant beings with their own rights? What is it, exactly, that gives humans a moral point of view: does it lie in our ability to have feelings and self-conscious awareness? Or in the ability to feel pain? And once we consider machines as entities that can perceive, feel, and act, it’s not a huge leap to ponder their legal status too.

Right now, AI systems are fairly superficial, and so it is widely agreed that current AI systems have no moral status. But AI is modelled with reference to human behavior: the very basis of reinforcement learning, for instance, is similar to the human brain’s reward system. These systems are becoming more complex and life-like as we speak, so figuring out their moral status will soon be a highly debated subject — and one that, like many of these questions, will involve dusting off our old philosophy books.
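
That reward-system parallel is quite literal. A minimal Q-learning sketch (a toy five-state world with invented parameters) shows behavior being shaped purely by a numeric reward, with no understanding attached:

```python
# Minimal Q-learning sketch: behavior shaped purely by a numeric reward
# signal. Toy 5-state corridor with the reward at the far end.
import random

N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.2

random.seed(0)
for _ in range(500):                        # 500 training episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon or q[s][0] == q[s][1]:
            a = random.randrange(2)         # explore (or break a tie)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0      # reward is the only "motivation"
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# Learned values climb as states get closer to the reward; the goal state
# itself is terminal and never updated: roughly [0.73, 0.81, 0.9, 1.0, 0.0].
print([round(max(row), 2) for row in q])
```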

Conclusion

It’s worth noting that a lot of the matters addressed above arise at the advent of any major technological revolution: the general concerns of misuse, of overdependence, of job loss due to replacement (an especially valid worry in the case of AI, though this technology is also likely to create new streams of jobs), and plain apprehension of something so new.
Advances in technology so far have arisen from our need to automate redundant tasks, and all we’re doing right now is expanding that definition. It’s important to see AI not as a replacement but as a collaboration: a way to make the human race better and more error-free than ever. And just as any major new system evolves, so does the need to constantly evaluate, regulate, and innovate our tech. And who better to do it than us? :)
