The Big 5 AI problems

Srini Janarthanam
7 min read · Feb 13, 2019


Ethical issues facing AI systems everywhere!

Photo by Nick Fewings on Unsplash

Surely you must have heard how AI could solve all our problems in the years to come. By the same measure, you may have been hearing all the bad news too: how AI is going to take over like Skynet did, how we could all lose our jobs, and so on. AI itself faces many problems, such as hype outpacing delivered results, lack of data, and bias in data. In this piece, I would like to delve into the problems that AI could create (or is already creating) in our human societies.

Many AI conferences, both in industry and in academia, have started debating these problems. I can think of five big ones:


Taking control


Loss of jobs


Bias


Explainability


Ethical decision making

It could take a while before industry and governments put their heads together to identify potential solutions to these issues so that AI solutions can be used without concern. But let’s delve a bit deeper and bring out the questions.

Taking Control

What will happen if robots take control of our lives? I am not talking about Matrix- or Skynet-style global control, with machines out there trying to wipe out humanity for whatever reason. That would be dreadful, but we are not there yet.

Talking about taking control, I am reminded of the conversation between Neo and Councillor Hamann in the movie The Matrix Reloaded.

We may not yet be at the point where we cannot switch off the machines and still survive. Or maybe we are.

Let us not talk of such extreme scenarios for now. But what about scenarios where our caretaker robots deny us, or the children they are minding, the freedom to do what we want? Imagine this: you want to step out of your home to bring your dog inside because there is a storm, and your robot says: “You are not allowed to go outside, there is a storm outside.”

What if your self-driving car does not let you out, or refuses to stop where you want to? Scary, isn’t it? Although such scenarios are plausible, I do wonder why they might happen. One reason could be that the robot or agent is malfunctioning or reasoning fallaciously; that calls for extensive testing and better quality controls. Another reason could be that taking control is part of the plan the agent came up with to achieve its goal: the goals that we set for our agents.

Loss of Jobs

Another key concern is that AI could displace a number of jobs; humans will be put out of work as AI becomes better at doing things that only humans could do before. AI could create new jobs in which humans program computers with AI capability. However, that is a huge skill jump for those facing the prospect of job loss. Most jobs that will go are repetitive and mundane ones. A study by Oxford University estimated, for a long list of occupations, the likelihood of each being displaced by AI.

Loss of jobs can hit the economy hard (Photo by rawpixel on Unsplash)

There is a lot of hype about what AI can do versus what it can actually deliver, which may mean that the loss of thousands of human jobs is not imminent. However, it is time for key decision makers in industry and government to start deliberating on this issue. How do we deal with jobs that are lost to AI? How should we restructure our educational system so that we skill our children to cope with an AI future? And what laws would create a future where machines and humans form a symbiotic relationship and work together?


Bias

One of the major problems that manifests in AI today is bias. AI solutions suffer from bias depending on the data and algorithms that drive them. There are two kinds of AI: programmed AI and learned AI. Programmed AI is intelligent behaviour produced by designing clever algorithms; the knowledge and expertise the system needs in order to behave intelligently are hand-coded by expert designers. Learned AI, on the other hand, learns to behave intelligently from data: machine learning algorithms learn from thousands of annotated examples. In both cases, bias seeps in. In programmed AI it comes from the designers; in learned AI it comes from sparse or skewed data.
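The distinction can be sketched in a few lines of code. Everything below is a hypothetical toy: a hand-coded loan-approval rule (programmed AI) next to a tiny "learner" that induces its threshold from labelled history (learned AI), so that whatever bias sits in the designer's rule or in the historical labels is carried straight into the system's behaviour.

```python
# Programmed AI: the designer hand-codes the knowledge. Any bias in the
# designer's rule (e.g. the income cutoff) is baked straight into the system.
def programmed_approve(income: float) -> bool:
    return income >= 30_000  # expert-chosen threshold


# Learned AI: behaviour is induced from annotated examples. This toy learner
# picks the income threshold that best reproduces past decisions, so any
# bias in the historical labels seeps into the learned rule.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    candidates = sorted(income for income, _ in examples)
    best_t, best_correct = candidates[0], -1
    for t in candidates:
        correct = sum((income >= t) == label for income, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t


# Hypothetical annotated history of past approvals.
history = [(20_000, False), (25_000, False), (40_000, True), (55_000, True)]
t = learn_threshold(history)
print(programmed_approve(45_000))  # True
print(t)                           # 40000, inherited from the training data
```

Neither system is "neutral": one encodes the designer's judgement, the other replays whatever the data contained.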

AI solutions can become biased towards a certain gender, race, or ideology depending on designer bias and data availability. A real-world instance of racial bias in AI was exhibited by COMPAS, a system used in the US to assess the likelihood of an offender committing crimes again. The system was biased against black defendants: it flagged them as high risk more frequently than it did their white counterparts.
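A minimal sketch of the kind of audit that exposed this disparity: compare false-positive rates (flagged high risk, but did not re-offend) across groups. The records below are made up for illustration; real audits use thousands of cases.

```python
# Group records: (group, flagged_high_risk, reoffended). Hypothetical data.
from collections import defaultdict

records = [
    ("A", True, False), ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
]


def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless flagged high risk."""
    negatives = [flagged for _, flagged, reoffended in rows if not reoffended]
    return sum(negatives) / len(negatives) if negatives else 0.0


# Bucket the records by group, then compare rates.
by_group = defaultdict(list)
for group, flagged, reoffended in records:
    by_group[group].append((group, flagged, reoffended))

for group, rows in sorted(by_group.items()):
    print(group, round(false_positive_rate(rows), 2))  # A: 0.67, B: 0.33
```

A gap like 0.67 vs 0.33 between groups is exactly the sort of signal the COMPAS analyses surfaced.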

Recently, AI service providers such as IBM and Accenture have come up with ways to detect and fight bias in the data used to train AI solutions.

AI should show no bias (Image from Pixabay)


Explainability

Can AI explain itself to us, or is it a black box? Why does it decide to do one thing and not another? Earlier, with algorithms that learned decision trees and rules, it was simple to read out what had been learned from the data. These days, with deep learning neural networks making inroads everywhere, it has become far more difficult to understand what an AI knows and what it doesn’t. Businesses selling AI products cannot explain to their clients why the AI did what it did, except to blame or credit the data it was trained on.

Explainable AI is a trending topic now, with customers wanting the right to ask why their requests were treated one way or the other by businesses. For instance: why was my mortgage application rejected? It is not enough to say, ‘AI crunched your numbers and based on the data we already have, you are ineligible.’
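To make the mortgage example concrete, here is a sketch of what an explainable decision could look like: each rule that fires contributes a human-readable reason alongside the verdict. The rules, field names, and thresholds are all hypothetical, not any lender's actual criteria.

```python
# An interpretable decision procedure: the verdict comes with reasons,
# unlike a black-box model that only outputs accept/reject.
def assess_mortgage(application: dict) -> tuple[bool, list[str]]:
    reasons = []
    if application["income"] < 3 * application["monthly_payment"] * 12:
        reasons.append("income below 3x the annual mortgage payment")
    if application["credit_score"] < 620:
        reasons.append("credit score below the 620 minimum")
    if application["deposit_pct"] < 10:
        reasons.append("deposit below 10% of the property value")
    return (not reasons, reasons)  # approved only if no rule fired


approved, why = assess_mortgage(
    {"income": 30_000, "monthly_payment": 1_200, "credit_score": 640, "deposit_pct": 15}
)
print(approved, why)
```

A rejected applicant here gets told which rule failed, not just that "the numbers were crunched". The open research question is how to get explanations of this quality out of systems that were learned rather than written by hand.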

AI decision should be explainable (Image from Pixabay)

Ethical decision making

Another key issue in AI technology is ethical decision making. What is right and what is wrong? This is where we start wondering how AI will decide between two evils: which one is worse, so that it can commit the lesser evil when given a hard choice between them. For instance, consider the classic problem of a self-driving car whose braking system has failed. It faces two people, one of whom it will have to crash into: an old lady and a young kid. Which of the two should it choose? Humans make such decisions too, but we never think about them beforehand and prepare ourselves. Such questions beg for answers when it comes to deploying AI to do our work in risky environments.

How do we deal with these issues?

A possible solution to the above problems also gives us a win-win in the humans-vs-AI problem. Instead of pitting AI and humans against each other as competitors, why not make them collaborators? AI solutions with human-friendly interfaces, such as chatbots and robots, can be paired with trained human supervisors, creating a symbiotic relationship.

AI has issues that humans can help with and humans can delegate simple, mundane tasks to AI.

Humans can be trained to understand how AI works, where it struggles, and what kind of help it needs. Just as computers and the Internet were introduced into workplaces, AI can be too; and just as we were trained to adapt to the new realities of working with computers and the Internet, we can be trained to adapt to AI. How do we delegate jobs to our AI colleagues? How do we supervise them and make sure they are doing a good job? How do we take responsibility for their actions, and train them to be better at what they do?

Humans can interpret AI decisions and will have the power to override them if necessary. Good decisions made by AI will be rewarded, and such feedback can help AI become better. As humans learn to work with AI and trust it more, the workplace will become healthier. By taking repetitive chores away from humans, AI can break the boredom and make work exciting and purposeful.



Srini Janarthanam

Chatbots, Conversational AI, and Natural Language Processing Expert. Book author — Hands On Chatbots and Conversational UI.