Can AI have Ethics?

Vaibhav Satpathy · Published in Grey Matter AI · 6 min read · Apr 17, 2021

Artificial Intelligence with Human Ethics.

The closer we get to understanding the human brain, the more we not only create advanced technologies and evolved intelligence systems, but also expose ourselves to the beautiful flaws introduced into those systems by the laws of nature.

Now, this might sound like a philosophical topic to discuss. But given the rate at which AI is developing, it is essential to understand the technology’s implications for society.

To better understand the importance of the “ethical component” in the field of AI, throughout this article we will draw parallels between the functioning of the human brain and artificial intelligence.

Let’s talk about some of the ethically questionable behaviours we humans tend to exhibit in our everyday lives —

Bias or Preference

Often in our everyday work, we act on our own biases, knowingly or unknowingly. This could be a product of a multitude of factors, including our environment, upbringing, cultural norms, or even our inherent nature. At the end of the day, we are still biased towards something or someone, irrespective of the reason.

Now let’s ask ourselves a question: who creates the data that is fed to intelligent systems for training? Well, that’s us — HUMANS. It is only natural for these systems to reflect and amplify the biases present in that data, which were in turn put there by individuals.

Hence it is very important to be extremely cautious about the data used to train a system; you are essentially teaching a child, and introducing bias is probably the easiest thing to do. In order to avoid such critical errors, let’s take a look at some of the types of bias —

Selection Bias
This is one of the most common types of bias. It arises when the dataset is not an apt representation of the real-world distribution, but is instead skewed towards a subset of categories. A frequently cited example is speech recognition in virtual assistants: because most of the technology is built by a small and homogeneous group of developers, some spoken accents are over-represented in the dataset, whereas other accents have little or no data at all.
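A quick way to catch this early is to simply count how the categories in your dataset are distributed before training. Below is a minimal sketch in Python; the accent labels and counts are entirely made up for illustration.

```python
from collections import Counter

# Hypothetical accent labels for a speech dataset. The numbers are
# invented purely to illustrate the skew check, not real statistics.
samples = ["us"] * 700 + ["uk"] * 200 + ["indian"] * 80 + ["nigerian"] * 20

counts = Counter(samples)
total = sum(counts.values())
for accent, n in counts.most_common():
    print(f"{accent:>9}: {n / total:6.1%} of training data")

# A 70% share for a single accent is a red flag: the model will very
# likely underperform on the under-represented ones.
```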

Implicit Bias
This type of bias creeps in through the implicit, intuitive assumptions we make by selectively accepting some information and ignoring the rest.

Perspective matters.

Depending on which segment of the information one perceives, the prediction tends to vary. In the same manner, depending on how much information the system is exposed to, it may make predictions that are false. This may be clichéd, but the system is simply “not looking at the BIG picture”.

It’s very important to keep in mind HOW, WHEN and WHERE the data was collected, as that plays a significant role in your model’s reliability.

Dishonesty

This concerns some of the more recent advancements in the field of AI. What we are referring to is the “DeepFake”. Since the introduction of Generative Adversarial Networks, better known as GANs, the world has not been the same. We now have the practical ability to synthesise almost any form of data, with whatever content suits us.

Generating fake news — possible. Bringing people back from the dead — possible. Making people say things they never said — possible. Creating art — possible.

Practically anything and everything that humans have created in the past can now be synthesised using AI. And the more unredacted information these systems are exposed to, the closer they get to generating truly realistic output, making it ever more difficult to differentiate between synthesised and real information.
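To make the mechanism concrete, here is a toy GAN in PyTorch. This is a hedged sketch: real DeepFake models are vastly larger, and the 1-D Gaussian target here is purely illustrative. What it does show is the adversarial loop at the heart of all such systems: a generator learning to produce samples that a discriminator can no longer tell apart from real ones.

```python
import torch
import torch.nn as nn

# Toy GAN that learns to mimic a 1-D Gaussian (mean 4, std 1.25).
# A minimal sketch of the adversarial idea, not a production model.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples from the "real" distribution
    noise = torch.randn(64, 8)

    # 1) Train the discriminator to separate real from fake.
    fake = G(noise).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The generated samples should now cluster around the real mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```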

We then reach a point where we start questioning REALITY itself.

Lois Lane vs Nicolas Cage

Another major example is GPT-3. Its launch created a lot of noise in the field of AI. Let’s look at what the developers had in mind when they built such a ground-breaking technology.

“We think that we could eventually expand the value of a tool like GPT-3 far beyond the realm of just writers, but rather as a tool for businesspeople, scientists and engineers as well. A tool that generates not just coherent paragraphs, but also coherent slide decks, product ideas or designs would dramatically help a lot of people. In the world of science for example, we often generate hypotheses that then require us to run large-scale experiments to disprove them. We can’t run experiments on the origin of the universe for example, but with GPT-3 we could generate a possible hypothesis for why it works the way it does and then simply test that hypothesis.”

Every word of the quoted paragraph above was generated by GPT-3. I know it’s mind-boggling, but this is the extent to which we are now vulnerable to the very technology we have built. As time passes, it becomes more and more difficult to distinguish reality from synthetic information.

Hence, understanding the implications of DeepFakes and GANs is of utmost importance not just to developers but to the whole of humanity.

Accountability and Explainability

Earlier, the traditional approach involved feature engineering: data scientists selected features and fed them to the system for training, which provided more control over what the system learned. Nowadays developers build models and feed them huge volumes of data, without a clear understanding of which features or patterns the system has or has not learned.

Although AI-based models are delivering state-of-the-art accuracy and outperforming humans at many tasks,

trusting one blindly is like choosing to cross a road blindfolded with the help of a stranger.

Today’s neural networks are more of a black box than a glass box. Although the results are extremely satisfying, we still lack complete clarity about how these networks actually function. There have been recent advancements and studies around “Explainable AI” and “Responsible AI”, but we shouldn’t forget that it is the developer’s duty to put the system under scrutiny at every step, because AI is not only used for personal convenience but also deployed for the greater good of society, affecting billions of lives.
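One practical probe, as a hedged example, is permutation importance: shuffle one input feature at a time and watch how much the model’s score drops. It won’t turn a black box into a glass box, but it does tell you which features the model actually leans on. The data below is synthetic; the point is the probe, not the model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real task: 8 features, only 3 of them informative.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy:
# a crude but honest peek inside the black box.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```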

Privacy

In the pursuit of building better AI, businesses need to collect lots of data. Unfortunately, they sometimes overstep their bounds and collect information over-zealously, beyond what is necessary for the task at hand. A business might believe it is using the data it collects for the greater good, but what if it is acquired by a company that does not share the same ethical boundaries?

All of this is at odds with the universally recognised human right to privacy. To minimise the data we collect, we can adopt technologies such as “Federated Learning”, which lets us train on users’ edge devices and gather only the learned updates, transferring them to a global model to improve overall performance without ever needing to touch user information.

Imagine being able to train without requiring access to PII data.
This is the kind of revolution that we need.
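Here is a bare-bones sketch of the federated-averaging (FedAvg) idea in plain NumPy. Everything below (the linear model, the client data, the hyper-parameters) is invented for illustration; the point is the flow: each client trains locally on data the server never sees, and only the learned weights are averaged into the global model.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # the pattern hidden in every client's private data

def local_update(global_w, n=100, lr=0.1, steps=20):
    # Private data is generated (and stays) inside the client; the
    # server only ever receives the weights returned at the end.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n   # gradient of mean squared error
        w -= lr * grad
    return w

global_w = np.zeros(2)
for round_ in range(5):
    client_weights = [local_update(global_w) for _ in range(10)]
    global_w = np.mean(client_weights, axis=0)   # server averages weights, never raw data
    print(f"round {round_}: global weights = {global_w}")
```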

Conclusion

At the end of the day, what we need to understand is that a system built to imitate its creators tends to latch onto their traits. To build a better system, one has to be extremely cautious about the training and the data the system is exposed to. In the right hands it can turn out to be a blessing; in the wrong hands, who knows.

I hope this article finds you well and helps you build more ethically sound systems for the betterment of humanity. 😁
