Lack of transparency could be AI’s fatal flaw

The AI community needs to course-correct

We need to know exactly how AI systems “think.”

Artificial intelligence has a transparency problem.

Actually, it has two transparency problems.

The first concerns public perception and understanding of how AI works; the second concerns how much developers actually understand about their own AI systems.

Public understanding of AI varies widely. Ask 10 people from different backgrounds about the current state of AI, and you’ll get answers ranging from “it’s totally useless” to “the robots already control everything.” Their answers will depend on how many real applications of AI they’ve been exposed to, and how many they’ve only heard about on TV. To complicate things further, many people may not realize when they’re interacting with AI.

We know self-driving cars rely on AI, but we may not realize that our email inboxes feature dozens of “AI” integrations.

Google’s Gmail keeps getting smarter, using AI, but it doesn’t tell users exactly how it identifies spam or sorts emails into our “Promotions” folder, or why its suggested responses are so invariably enthusiastic and submissive (e.g., “Got it, thank you!” and “This is great, thanks!”).

Part of our problem is a vocabulary issue. The AI community talks about developing “machine learning” and “artificial intelligence,” but in reality, developers are working with spreadsheets on steroids: training a program to recognize and imitate patterns, which does not actually resemble human intelligence at all.
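To make that concrete, here is a minimal sketch of what “learning a pattern” usually amounts to in practice. Everything in it is an illustrative assumption (the hours-versus-scores numbers, the use of NumPy); the point is simply that the program fits a line through a table of numbers and extrapolates, with no understanding of what the numbers mean.

```python
# A toy "machine learning" example: fit a pattern in a tiny table of numbers.
# The data and the library choice (NumPy) are assumptions for illustration only.
import numpy as np

# A tiny "spreadsheet": hours studied vs. exam score.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([52.0, 57.0, 61.0, 68.0, 72.0])

# "Training": a least-squares fit of a straight line to the pattern.
slope, intercept = np.polyfit(hours, scores, 1)

# "Prediction": imitate the pattern for an unseen input.
# The program has no concept of studying or exams -- it only extends a line.
print(round(slope * 6.0 + intercept, 1))
```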

Or we use phrases like “neural network,” which doesn’t really communicate anything to the layperson but claims to model human thinking (spoiler alert: it doesn’t, and the results are often comically terrible). We call a machine like Amazon’s Alexa an “assistant,” which sounds like it might have some intelligence, but it’s primarily a voice-activated search engine with a few scripted functions (like the timer and shopping list).

The AI community needs to do better. The public needs a clear understanding of what AI does so that people stay engaged with its development and can hold developers accountable for creating the kind of AI that does more good than harm. When you have to wade through misinformation and vague explanations of what AI can already do, you end up apathetic or afraid. If you don’t know what’s fact and what’s fiction, or what level of “intelligence” AI systems actually have, you can’t advocate for prudent development.

Transparency in AI means using intuitive language to talk about the systems we are developing, how they work, and what they are capable of. And it means explaining where the data comes from.

At Mind AI, we are developing a “reasoning engine.” We call it that because we’ve developed a patented new data structure that models the three kinds of human reasoning (inductive, deductive, and abductive). The data we use to educate the engine comes from open datasets such as WordNet and Wiktionary, and from contributions from our community. We don’t train our engine on any kind of personal data. We’re not tracking individuals’ movements to learn their behaviors and “personalize” their experience while exploiting them.
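For readers unfamiliar with those terms, the toy snippet below illustrates the difference between the three modes of reasoning. It is purely illustrative: the bird facts and the probabilities are made up for this post, and the code has nothing to do with our actual data structure, which is not described here.

```python
# Deduction: apply a general rule to a specific case.
rules = {"bird": "can fly"}                      # general rule (simplified)
deduced = "a sparrow " + rules["bird"]           # a sparrow is a bird -> "a sparrow can fly"

# Induction: generalize a rule from repeated observations.
observations = ["sparrow flies", "robin flies", "pigeon flies"]
induced_rule = "birds can fly"                   # a generalization -- and fallible (penguins!)

# Abduction: choose the most plausible explanation for an observation.
observation = "the lawn is wet"
explanations = {"it rained": 0.70, "the sprinkler ran": 0.25, "a pipe burst": 0.05}
best_explanation = max(explanations, key=explanations.get)

print(deduced, "|", induced_rule, "|", best_explanation)
```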

When it comes to explaining how AI works, developers have more than a vocabulary problem. In many cases, they don’t actually know how a machine learning algorithm got from point A to point B: it’s a black box. Pinpointing logic errors is almost impossible, and trust is based on historical accuracy and probability, not on a verifiable process that can be audited for accuracy and predictability.
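Here is a minimal sketch of that opacity, assuming scikit-learn as the toolkit (no specific library is at issue) and a made-up toy task. The trained network answers correctly most of the time, but the only “explanation” available for any single answer is a stack of learned weight matrices.

```python
# A small neural network trained on a toy task: classify points by whether
# x + y > 1. scikit-learn and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.9, 0.8]]))  # e.g. [1] -- but why?
print(model.coefs_[0])              # the "reasoning": a 2x8 matrix of floats
```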

Most AI developers agree that transparency is crucial to developing safe AI, but they differ on how to achieve it. Machine learning developers are working on methods for gaining more clarity into the reasoning process of their algorithms, trying to understand which factors influence which results. But we think this approaches the problem backwards.

Our data structure provides complete transparency into the machine’s reasoning process from beginning to end. We can trace the logical path the machine used to generate an answer to a question, and if there is a faulty conclusion, we can fix the exact error that led to it with surgical precision. We believe this approach will be much more successful and efficient than any deep learning or neural network approach.
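By way of contrast with the black-box sketch above, here is a hypothetical illustration of what an auditable reasoning trace can look like. It is not our patented data structure (which this post does not describe); the Step record, the derive function, and the “is-a” facts are simplified stand-ins.

```python
# Hypothetical traceable reasoning: every inference is an explicit record.
from dataclasses import dataclass

@dataclass
class Step:
    rule: str          # the fact/rule that was applied
    conclusion: str    # what was concluded at this step

def derive(facts: dict, start: str) -> list:
    """Chain 'is-a' facts starting from `start`, recording every step."""
    trace, current = [], start
    while current in facts:
        parent = facts[current]
        trace.append(Step(rule=f"{current} is a {parent}",
                          conclusion=f"{start} is a {parent}"))
        current = parent
    return trace

facts = {"sparrow": "bird", "bird": "vertebrate"}
for step in derive(facts, "sparrow"):
    print(step)        # each inference is visible and individually auditable
```

Because every inference leaves an explicit record, a faulty conclusion can be traced back to the specific fact or rule that produced it and corrected there, rather than by retraining an opaque model.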

To stay up-to-date on our progress, sign up for our email list, talk to us on Telegram, or follow us here on Medium.

Read more about Mind AI: