The AI black box

Balázs Suhajda
Jan 29 · 2 min read

I’m currently attending an event at LTH called “AI Lund fika-till-fika workshop om AI i offentlig sektor” (a day-long workshop on AI in the public sector), and a recurring theme is trust in AI, or the AI black box. This refers to the problem that it is difficult to know how a trained AI model came to a certain conclusion. And this much is true.

But compared to what? Is the human decision process any clearer? Anyone curious about the inner workings of the human mind can tell you: we are pretty much black boxes ourselves. Our decisions emerge from the black boxes of our minds, and when we need to explain them, we construct a story that aligns with our self-image and our values. Plenty of experiments have demonstrated how fallible our decision processes are, and how unreliable our explanations of them can be. “Thinking, Fast and Slow” by Daniel Kahneman highlights many of them. My favorite example involved asking men to pick which woman was their type from a pair of photos. After a short distraction, the photo they had not chosen was presented to them and they were asked to explain their choice. Surprisingly, they could explain with full conviction how the photo they had not picked was exactly their type… 😊

Yet we trust ourselves and each other based on cultural and personal values. So perhaps our need to understand AI decisions would be satisfied if AI systems could generate valid arguments that fit the culture and ethical values of society, just as we do?

