5 reasons why you need explainable AI

Franck Boullier
Published in Antler · 7 min read · Aug 3, 2020

The scariest thing about Artificial Intelligence is that we never know who the teacher is!


You can’t ignore Artificial Intelligence:

If you’re working on a tech startup, AI and Machine Learning are likely part of your roadmap (and if they’re not, they should be).

Artificial Intelligence (AI) is all around us. AI is there when you search for something on the Internet. AI helps us filter spam emails. AI enables Siri or Alexa to understand what we ask.

AI exists to help us answer questions, and with those answers, we hope to make better decisions.

It sounds good.

But then why are people like Elon Musk, Bill Gates, or the late Stephen Hawking so worried about AI?

That’s because there is a genuine risk that we will create some AIs that we won’t understand or, worse, control (hello Skynet!).

AI Explainability is a set of mechanisms that we can use to make sure that a human can understand why a machine has made a specific decision.

Below are my five reasons why you should always try to build an Explainable AI whenever you need Artificial Intelligence.

Reason 1 — AI will sometimes be wrong:

A few years back, the worst offender was autocorrect: sometimes embarrassing, sometimes funny, but with limited consequences beyond the awkward moment.

Today, there is an endless stream of stories about AI failures caused by bias, incomplete data, or incorrect models.

Be mindful of AI’s potential errors (Source: Shutterstock, Andrew Rybalko)

Reason 2 — How do we know that this is the “right” answer?

Some questions have simple answers. We can probably all agree that (in most cases) “2” is the correct answer to the question “1+1=?”.

But, as the questions get more complicated, things get trickier.

Even a question like “Is this a cat?”, which seems simple enough, is not that easy to answer.

The first thing is to agree on the definition of “a cat”: do we consider a tiger or a lion to be a cat? And there are many more steps involved after that.

If the question becomes sophisticated enough, having a universally accepted definition of the “right answer” can get tricky.

Consider these questions:

  • A vs. B: who’s right?
  • Will you like this?
  • What is happiness?

There are many questions where there is no clear “right” answer.

Remember that humans can’t even agree among themselves on the benefits of vaccines, the reality of global warming, or whether the earth is round.

How, then, can they trust that an AI will be better at finding the right answer?

Reason 3 — The Teacher has an enormous influence on the AI:

All AIs start out as little kids: they need to learn before they can give the “right” answer (this is a cat, yes/no).

They learn how to get the right answer by ingesting a lot of data and using Machine Learning algorithms. Then they compare their own response with what the teacher has decided is the “right” answer.

If the AI gets the answer wrong, it adjusts its approach and tries again.

When enough answers made by the machine match the “right” answers defined by the teacher, we consider that the AI has learned its lesson.

Class is over, the AI has graduated, and we release it into the wild. The AI will use its newly acquired intelligence to answer the question (Is this a cat?).

If we expect a machine to get the right answer, we need to properly teach the machine about the “right” and the “wrong” answers.

With AI today, we never know who the teacher is or what bias the teacher may have introduced in the algorithm or in the data used to teach the machine.

Screen capture from YouTube: one of the scientists who created Sophia, the first AI granted citizenship by the Saudi Arabian government.
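To make the teacher’s influence concrete, here is a minimal sketch (using scikit-learn, with features and labels invented purely for illustration). The same algorithm, trained on the same animals, gives opposite answers to “Is a tiger a cat?” depending on which labels the teacher provided:

```python
# A toy illustration of the teacher's influence, using scikit-learn.
# The features and labels below are invented for this example.
from sklearn.tree import DecisionTreeClassifier

# Each animal is described by two features: [weight_kg, is_domesticated].
animals = [[4, 1], [5, 1], [190, 0], [220, 0]]

# Teacher A labels every feline a cat (1 = cat, 0 = not a cat).
labels_teacher_a = [1, 1, 1, 1]
# Teacher B only accepts small, domesticated felines as cats.
labels_teacher_b = [1, 1, 0, 0]

tiger = [[200, 0]]  # a 200 kg, non-domesticated feline

for name, labels in [("Teacher A", labels_teacher_a),
                     ("Teacher B", labels_teacher_b)]:
    model = DecisionTreeClassifier().fit(animals, labels)
    print(name, "says a tiger is a cat:", bool(model.predict(tiger)[0]))

# Same algorithm, same animals, different teachers: different "right" answers.
```

Neither model is “broken”: each one faithfully learned what its teacher taught it.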

Reason 4 — AIs are getting better and better:

It is a commonly accepted belief in the AI community that, at some point, AI will become more “intelligent” than humans:

You can think of intelligence as a railway line.

AI improvement is like a train moving on that line.

There are many different AIs and each specific Artificial Intelligence is a passenger on that train.

As the train moves past the different stations on the intelligence line, more AIs get off the train and start doing their job (answering questions) in the real world.

If you want to know more about the different types of AI and better understand the Artificial Intelligence revolution, read the excellent Wait But Why post.

First, we saw Artificial Narrow Intelligences (ANI) getting off the train. Each one focuses on a single narrow task: filtering spam emails, finding the cat in a picture, driving a car.

Next, we will see Artificial General Intelligences (AGI). The train station where these AIs will “disembark” is the Human Brain’s Capacity station.

By definition, we human beings cannot move past the “Human Brain’s Capacity” train station on the intelligence railway line. It is the end of the intelligence line for all of us.

It is different for machines and AIs: they can stay on the intelligence train after that station. These AIs are called Artificial Super Intelligences (ASI).

By definition, ASIs will be more capable than any human on the “intelligence” front.

At some point, ASIs will get off the intelligence train and start answering questions in the real world.

We want AIs to find the best way to get to “the right answer”.

There is no reason for an AI to ever stop learning new ways to get to that “right answer”.


Reason 5 — We all need to trust the process:

The AI behind Google Translate made up its own new language a few years ago.

That new language serves a narrow purpose (translating things) with maximum efficiency. Still, it’s a language that is “not readable or usable for humans”.

It’s OK because many people can easily and quickly check the translation and decide if it’s good enough.

But when the question becomes complex (What is happiness? Will you like this?), it is impossible to find a consensus on a “universally right” or “universally wrong” answer.

We have to fall back on the next best thing: we need to trust and agree with the decision-making process.

We need to understand the key factors, variables, and the general process that influenced the AI if we want to trust its answer to the question we asked.

We need to have some understanding of how the AI has reached THIS conclusion.
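One simple, concrete way to get that understanding is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy suffers. The features that hurt the most when shuffled are the ones driving the decision. Here is a hedged sketch using scikit-learn and one of its bundled datasets (just one of many explainability techniques, shown for illustration only):

```python
# A sketch of one explainability technique: permutation importance.
# Uses scikit-learn and its bundled breast-cancer dataset for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the accuracy drops:
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model relies on the most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

A ranked list of influential features is not a full explanation, but it is a first step toward answering why the AI reached THIS conclusion.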


Never forget the importance of the “Why”:

Every management guru will tell you that people will trust you more if you can explain WHY you do something.

It should be the same for AIs:

  • AIs are becoming more and more pervasive.
  • The questions we ask AIs are becoming more and more complex.
  • The line between “right” and “wrong” is becoming harder to establish.

But most AIs today are still big black boxes.

We do not understand how an AI “connects the dots” and decides that its answer is the “right” answer to the question we asked.

It has to change.

AI Explainability is the key to having AIs we can trust.

With AI explainability, we can:

  • Trust that an AI is giving us the “right” answer.
  • Identify and flag the “wrong” answers that AIs WILL give us from time to time.
  • Understand the training biases and pitfalls of our AI models.
  • Build more advanced AIs that are capable of giving us the best possible answers.
  • Make sure that everybody trusts the process.

The good news is that AI explainability is gaining traction: Google launched its Explainable AI suite of tools in November 2019. I expect that the other major players in the field will follow suit.

I believe that companies that use explainable AI will gain a significant competitive advantage soon.

What do you think?

Are you using Machine Learning algorithms and AI in your startup?

Have you considered explainable AIs already?

I’m Franck Boullier, serial entrepreneur, startup advisor (including Antler Team), database architect, developer, PropTech enthusiast, and generally curious about what’s going to happen next. You can reach me on LinkedIn.
