Designing trust into AI systems

Shubham Khatkar · Published in Matters · 4 min read · Jan 10, 2019

Why does building trust matter?

Artificial intelligence is becoming an inseparable part of our lives. We use AI-powered products and services every day, from the more visible, like Amazon’s Alexa and chatbots, to the behind-the-scenes AI that powers our phones, social media feeds, and online advertising.

Sure, Alexa doesn’t make life-changing decisions for us. But when we inject AI into products like an autonomous car, it becomes crucial that we trust that system. And what about when we start to apply AI to more and more uses, in more and more industries? Imagine if you could receive medical care through a virtual doctor — you would need to trust the AI as much as you would a human doctor. Building a relationship of trust between humans and this technology needs to be a focus for both developers and designers.

How can designers help?

1. Build in transparency

To trust the decisions made by an AI system, you need to understand how it came to that decision. Unfortunately, in today’s world, machine learning is not transparent enough and remains a black box.

As designers, it’s our responsibility to build design solutions that promote transparency. We should be open with users on how these AI systems are interacting with them, what information the system is gathering, and from what sources. For example, while designing an app, be clear about what data or hardware is being accessed — and why — so users can make informed decisions and give meaningful consent. Also, give users the ability to change their mind at a later stage.

Today, the demand for personal information is largely controlled by the businesses that own these products. This needs to change: the user should be given control to decide what personal data they’re willing to share. Designers are intermediaries between businesses and users — they can help make the business case for a user-centric approach to data collection.

2. Give artificial intelligence artificial empathy

Artificial empathy is as important as artificial intelligence. It’s going to be an essential element for the successful application of AI in the future.

Right now, we tend to place a lot of cognitive trust in AI — how much we trust AI’s competency and ability to complete a task. But we’re lacking affective trust — how much we trust AI with things like our personal details and data. One is based on reasoning, the other on an emotional bond. People need both to build trust.

This is where designers and technologists need to work together. Human-centered design, grounded in empathy, can be a powerful tool to build affective trust alongside cognitive trust. Already today, AI systems can infer human emotions from facial expressions and voice. For designers, the challenge is to grow artificial empathy at the same pace as artificial intelligence. If we combine this technology with human-centered design, designers and technologists could create an emotionally intelligent AI.

3. Design for mistakes

Just like humans, AI makes mistakes. It’s impossible to build a system that never does.

When AI systems make errors, it can come at the cost of losing the user’s trust. The more serious the situation (think: self-driving cars), the more serious the loss.

We can work to reduce the number of mistakes the AI makes, taking user feedback to help the system learn and improve. But we can also help maintain trust after a mistake by designing the AI’s response. For example, if an autonomous car crashes, the AI should walk the user through what to do next, and call a mechanic or the emergency services, depending on the severity of the accident. We would also need to help the user understand errors by being transparent about their causes, and about how the AI will be improved in the future.

4. Build out bias

According to research by IBM, AI bias will explode — the number of biased AI systems will increase, but ultimately only unbiased AI will survive.

Biases in AI systems can erode trust between users and machines. One of the challenges in tackling these biases is a lack of diversity in the data samples used to train the AI system. Bad datasets can replicate, or even amplify, human biases. But designers can help by bringing more diversity to the data. Design research can capture the qualitative user insights that big data might fail to provide — insights that can be used to train more accurate and less biased AI systems.

Meanwhile, AI systems trained by a small team can become susceptible to the thinking of like-minded individuals. Multi-disciplinary teams of technologists, designers, anthropologists, and domain experts can bring a much more holistic approach to building out bias.

As we look further into the future, designing trust into AI systems will only become more important. Technology is evolving from Artificial Narrow Intelligence, which is focused on a single task, to Artificial General Intelligence (AGI) — AI that can understand and reason about its environment just like a human. As AGI-powered technology starts helping us with more and more complicated tasks and decisions, the need for trust will only grow. Let’s start designing it now.
