The Startup

Explaining Explainable AI


I’ve decided to write this article because, in my PhD thesis, I’m working on a platform that will offer explainability out of the box. Explainability is just one of the objectives we want to achieve, but it is a very important part of the research.

Before jumping into the “ugly” technical part of this article, let’s understand what eXplainable AI (XAI) is and why we need it.

As you have probably heard, with Artificial Neural Networks (ANNs) we try to copy, or at least simulate, the internal structure and functioning of the human brain. The crazy part is that we don’t even understand exactly how the brain works; we are just guessing, and based on something we don’t fully understand, we’ve created Artificial Neural Networks. The craziest part is that now we want to understand the internal structure of an ANN (which was itself created based on a guess), to see how it learns high-level representations, such as classifying images, and to debug these networks.

A very oversimplified definition of a neural network is a function which, based on its inputs, can automatically adjust its parameters to approximate the desired output.
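To make that definition concrete, here is a minimal, hypothetical sketch in pure Python: a single “neuron” y = w·x + b that adjusts its two parameters by gradient descent until it approximates a target function. (The target function, learning rate, and sample range are my own illustrative choices, not from the article.)

```python
# Hypothetical illustration: a one-parameter-pair "network" y = w*x + b
# that learns to approximate the target function y = 2x + 1.

data = [(x, 2 * x + 1) for x in range(-5, 6)]  # input/output samples of the target

w, b = 0.0, 0.0   # parameters, initialized arbitrarily
lr = 0.01         # learning rate: how strongly each error adjusts the parameters

for _ in range(2000):          # repeat over the data many times
    for x, y in data:
        pred = w * x + b       # the network's current guess
        error = pred - y       # how far the guess is from the true output
        # gradient-descent update: nudge w and b to reduce the squared error
        w -= lr * error * x
        b -= lr * error

print(round(w, 2), round(b, 2))  # w approaches 2.0, b approaches 1.0
```

This is exactly the “automatically adjust its parameters to approximate the output” idea: the function never *understands* the target, it only moves its parameters in whatever direction shrinks the error.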

When do we usually use Neural Networks to solve a problem? When we don’t really understand the problem…


Published in The Startup
Written by Czako Zoltan

I'm a Full-Stack Developer with experience in multiple domains, including Backend, Frontend, DevOps, IoT, and Artificial Intelligence.
