How Glass Box AI Models Can Support X-AI

Slimmer AI
Published in Slimmer AI · 1 min read · Jun 17, 2021


Helping humans better understand how models come to their predictions

Photo by Christina Morillo from Pexels

Bas Roelenga, a Machine Learning Engineer at Slimmer AI, recently published an article in Towards Data Science entitled “Think outside the ‘black’ box”, in which he discusses how and when to use black box vs. glass box AI models.

In his article, Bas shows how you can still achieve state-of-the-art accuracy while maintaining explainability. He also explains why building trust and explainability into our models is one of our top priorities (see also Ayla Kangur’s post on explainable AI in practice).
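To make the glass box idea concrete, here is a minimal, illustrative sketch (not code from Bas’s article): with a simple linear model, a prediction can be decomposed into per-feature contributions, so a human can see exactly why the model scored an input the way it did. All weights, feature names, and values below are hypothetical.

```python
def predict_with_explanation(weights, bias, features):
    """Glass-box linear model: return the score plus a breakdown
    of how much each feature contributed to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical weights and input, purely for illustration.
weights = {"income": 0.4, "debt": -0.7, "age": 0.1}
score, contribs = predict_with_explanation(
    weights, bias=0.2,
    features={"income": 3.0, "debt": 1.0, "age": 2.0},
)
# Each feature's effect is directly visible in `contribs`:
# income adds 0.4 * 3.0, debt subtracts 0.7 * 1.0, age adds 0.1 * 2.0.
```

A black box model, such as a deep neural network, offers no such direct decomposition, which is why post-hoc explanation tools exist at all; a glass box model makes the explanation part of the prediction itself.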

Artificial Intelligence plays a big role in our daily lives. AI is used everywhere, from our search queries on Google to self-driving vehicles such as Tesla’s. With the use of deep learning, the models behind these applications have become even more complex. In fact, they are so complex that in many cases we have no idea how they reach their decisions.

As we expand how much we rely on AI, selecting the right models and focusing on explainable AI will continue to be important.

Follow us on LinkedIn and Twitter for more stories like this.


Slimmer AI

We are a venture studio, co-building new AI-powered ventures with founders.