Understanding Computers and Society — Ethics

Questions We Need to Answer Right Now — Part 1

Saif Uddin Mahmud
Dabbler in Destress
6 min read · Oct 23, 2019


Photo by Chris Barbalis on Unsplash

We live in a time in which babies born today will grow up unable to fathom a world without smartphones and constant internet connectivity, the same way it is hard for us to fathom a world without pen and paper. This mind-boggling technological advancement happened only very recently in human history, within the last 70 years or so. Yet as we get busier and busier inventing new ways to make life "better" with our digital sidekicks (laptops, smartphones, Google, and Siri, to name a few), we seldom stop to think about the societal implications. Computers are shaping how we work, play, and live; we are going to have to grapple with those implications very soon.

In recent years, technology has confronted us with ethical dilemmas that make us question the very philosophical foundations of our morals. With the emergence of big data, we have seen the privacy issues created by tech giants like Facebook, Google, and Amazon. We have seen bias and racism creep into the algorithms we develop. We have seen how surveillance and the ability to rake through the data of millions of people hand even more power to police states. Maybe it is time to contemplate how we want to regulate technology in a world where ethics seems to take a back seat to money and power.

In this series of articles, we will explore the various dimensions of computing that give rise to questions and issues for society: Ethics, Privacy and Freedom, IP and International Development, Design and Diversity, Gender Inequality, AI, Failures, and Sustainability.

The Ethical Dimension

Before we tackle these deep problems head-on, let's get on the same page about ethics. The questions in ethics are simple yet uncomfortable: How do you know right from wrong? Is it what your parents teach you? Is it what school teaches you? Is it what religion teaches you? Which religion? Is it what the law teaches you? Who makes these laws?

With the boom of the software industry, the features we build into ubiquitous technology can change people's lives, for good and for ill. Take Facebook, for example. It has enabled billions across the globe to come together and network in a completely new way. Yet its vulnerability to clever newsfeed manipulation allowed Russian operatives to meddle in the 2016 US election. Take Google as another example: it can censor some search results and shield people, especially children, from gruesome content on the internet. Yet how far can we let this censorship go? When does it become a force that impedes freedom of speech? More recently, we have seen the Trolley Problem and its intriguing variations challenge the ethical dimensions of machine learning algorithms.

Oh Dear…

To talk about these issues, we first need a common language. Let's review the existing ethical frameworks and see whether they can shed some light on these problems:

Virtue Ethics is an ethical standard that emphasizes the actor's internal character when deciding which action to take. These values can be courage, temperance, moderation, and so on. The problem is that values can be contradictory, that the existence and relative ordering of values differ from person to person, and that we cannot prove any list of values is sufficient for every circumstance. So far, virtue ethics seems fine for personal affairs. In a diverse society where people have unique internal moral compasses, though, it is hard to apply.

Deontological Ethics claims that right and wrong are like black and white: we must always choose the right thing to do, regardless of the consequences that stubbornness might bring.

Utilitarianism, on the other hand, is consequentialist in nature: it claims that we have to do a cost-benefit analysis and take the course of action that maximizes the benefit to society as a whole. This camp is divided into two sub-camps: act utilitarianism (an action is worth taking if its benefits outweigh its harms) and rule utilitarianism (generalize from acts to rules, and follow the rules that maximize utility). However, as you might guess, the far-reaching consequences of an action are impossible to predict; the problem is intractable. And what should we maximize anyway? Happiness? Wealth? Fairness?

If you're still not sure what each camp is saying, and I don't blame you for losing track of the nuances, this article examining the ethical differences between the three Avengers Prime could help, and entertain you at the same time, if you're a Marvel fan.

There are other theories as well. Immanuel Kant's Categorical Imperative claims we must act only on universalizable laws that every rational agent has to follow. Can current notions of Human Rights be the basis for such laws? Will they suffice in the face of disruptive technological advancements?

Hobbes' Social Contract Theory declares that "we agree to limit our freedom of action to achieve equilibrium according to a social contract," and is grounded in rights that everyone has to respect. Who decides these rights? Would you be willing to give up some freedom for the greater good?

Habermas' Private-Public Sphere Model suggests you can do whatever you want in the privacy of your home, but you must respect the law when you're in public. Does that mean you can do illegal things at home? What about domestic abuse? What about shifting public outlooks that eventually lead to conflict (anti-segregation, the LGBTQ movement, antisemitism, Islamophobia, etc.)? Should we judge people of the past who supported discrimination? If you don't judge them, are you justifying or defending them? Should future generations judge us?

As you can imagine, deciding which philosophical perspective to follow when talking about technology is difficult (if not downright impossible), especially in today's globalized world, where citizens of different societies mingle all the time. Whether you live in a homogeneous society or a polarized community, multiple shifting frameworks are at play.

Now try examining these cases to see which camp you fall into: State of Indiana v. Ford Motor Company. Would you kill baby Hitler? Should Batman kill the Joker? Or apply the frameworks on a more personal level: Should you get a ticket for driving a bit over the speed limit? Is it okay to cheat on a test? Lie to your parents? Use fake names on the internet? Ship a heavily marketed game with critical bugs just because it is release day? Train your personal neural network on the school's PC clusters? Does technology follow ethics, or does technology shape ethics? What are the ethics of interacting digitally? Is cyberbullying wrong? Is lying in tweets wrong?

As a society, as a global civilization, we humans need to be able to answer these questions. These answers will steer policy-making in the coming years. These answers will let us decide whether a new technology is creepy or useful: Google Maps, the Apple Watch, Google Glass, Neuralink, nanobots in your bloodstream, robo-partners, the pursuit of Artificial General Intelligence, the quest to become super-human, the pursuit of immortality. We can't stop exponentially growing technology, so we should solve the Digital Ethics Problem as soon as possible, before our laws become obsolete at an unprecedented rate.

Some of the questions asked here may have had easy answers for you, perhaps even knee-jerk reactions. It's important to understand that someone else in the world had a knee-jerk reaction just as strong as yours, only in the opposite direction.

Do you think any of the existing ethical frameworks answers these questions decisively? Do you think it is possible to come to an agreement on any of them? Or should we develop a separate, or additional, framework for talking about technology? If this piece made you uncomfortable (VERY uncomfortable, even), that's good. It is time to ponder these questions; let us know your opinion in the comments below!

Now that we have a common language, let us look at the glaring issues of today. In the next part, we will examine another widespread concern in tech: Privacy and Freedom.

I've tried to break this multi-part series into readable chunks, tackling one set of issues at a time, so that the main message is not diluted. If you have any feedback on the article, feel free to reach out to me. If you liked it and think more people should know about it, pass it along to a friend!

This piece was inspired by CSC300: Computers and Society, taken at the University of Toronto during Fall 2019, under Ishtiaque Ahmed. Please note that the opinions in this piece are mine alone.
