Should humans ever trust artificial intelligence?

Connor Upton · Published in Design Voices
Sep 4, 2019 · 6 min read

A few weeks ago I was asked to participate in a debate at the Dock entitled “Humans should never trust artificial intelligence”. The request was to argue for the proposition, but I wasn’t sure about taking it on. The audience included many senior technology leaders, innovation consultants, data scientists and engineers, so not exactly a neutral crowd. Besides, I usually find myself on the other side of this debate: I’ve been researching human-AI interaction for a number of years, and building trust is a central theme of that work.

My initial belief was that this battle is over. Artificial intelligence (AI) is already deeply interwoven into many of our interactions with digital platforms and the wider world. What’s more, governments are now seriously considering the wider effects of AI. In fact, one person on the opposing team was part of the high-level expert group that wrote the European Union’s Ethics Guidelines for Trustworthy AI. So it seems like trust in AI is almost inevitable.

[Image: iPhone screenshot showing Siri asked “Can I trust you?” and responding “I’m not sure I understand.”]
Siri seems to lack expertise on matters of trust.

However, as I started to research the topic I realised that words really matter. “Humans should never trust AI” is a very broad statement. It means we have to think about humans in the broadest sense: not experts, not industry, but the general public. And it means that when we talk about ‘trust’ we need to consider it in the terms that most humans understand. This brought me to the following argument.

Humans should never trust artificial intelligence

I want you to think of a person, institution or thing that really embodies trust for you. Think about a moment when you depended on that trust. Think about how that experience made you feel.

Trust is about relationships

Fundamentally, when we talk about trust we are talking about relationships. Not necessarily between people: you may trust your school, your government, your pet. When we look at the cultural history of our relationship with AI, there is a deep mistrust: from Frankenstein, the Terminator and HAL, all the way up to a recent study by Pew Research Center which showed that the majority of Americans do not trust algorithmic decision-making. Where does this mistrust come from? Is it a Luddite mindset or, like many other heuristics in life, is it based on some deep truths?

The characteristics of trust

A frequently cited study by McKnight and Chervany identified four key qualities needed to establish trust in others: competence, predictability, integrity and benevolence. So let’s take a look at how AI measures up.

First, competence: does the other party have the ability to act in a knowledgeable manner? Sure, an AI can do this within a strict frame of reference, but AI struggles with novel situations and is unconcerned with the wider consequences of its actions (the short sketch after this list makes the point concrete). So can AI be described as truly competent when it comes to understanding the world?
Next, predictability: does the other party behave as expected? AI is powered by data and probability, so, yes, by and large it produces expected outcomes, with some notable exceptions like the 2010 flash crash in the US stock market. But hey, what’s a $1 trillion fluctuation between friends!
Then, integrity: acting in good faith and telling the truth. Now we really start to get into trouble. Most of the time we as humans know what’s right and wrong, but an AI has no moral compass. AI is used to create fake news, synthetic realities and deepfakes. It simply does what it is designed to do. It has no concept of truth, so how can we trust its integrity?
Lastly, benevolence: acting in the interest of others, even at your own expense. Social capital plays an important role in trust. It stems from having shared vulnerabilities, from knowing that at some stage your selfless actions may be repaid when you are the one in need. As a synthetic, inorganic agent, AI does not, and cannot, share those vulnerabilities with us.

So: competence, predictability, integrity and benevolence. Are all four needed for trust to exist?
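To make the competence and predictability points concrete, here is a minimal sketch (my own illustration, not something from the debate; the model and data are entirely hypothetical) using Python and scikit-learn. A model trained on a narrow slice of the world still returns a confident probability when asked about something far outside that slice. It has no way to say “I don’t know”.

```python
# Minimal sketch: a classifier is "competent" only inside its frame of
# reference, yet it reports confident probabilities everywhere.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: a single feature in the range [0, 10], two classes.
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = (X_train[:, 0] > 5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A novel situation: an input far outside anything the model has seen.
X_novel = np.array([[1000.0]])
proba = model.predict_proba(X_novel)[0, 1]

# The model is near-certain, even though it has no grounds for
# competence here. Confidence is not the same as understanding.
print(f"P(class 1 | x = 1000) = {proba:.4f}")  # ~1.0
```

The sketch is trivial on purpose: the model behaves predictably and competently inside its training distribution, which is exactly why its confident answers outside it are so easy to mistake for understanding.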

Is AI already trusted?

Some may say that, through exhibiting competence and predictability, AI has already earned our trust. After all, billions of humans already use AI to get movie recommendations or to navigate around unfamiliar cities. But this form of convenient automation is like saying you trust your watch to tell the time. Is this what you thought about when you envisaged trust earlier?

Some may feel that it is unfair to use a social frame of reference when discussing trust in AI, but the field of AI has an obsession with anthropomorphism. Siri, Alexa, Cortana, Watson…even Clippy? Portraying AI as human-like agents has been a key strategy in encouraging adoption of the technology. In doing so, the industry has asked us to trust these agents in the same way we would trust a colleague.

But the use of AI to automate tasks or procedures is not what we are debating today, and it’s not what most humans think about when we talk about trust.

Conclusion
Trust is about relationships. Trust is a social construct, and AI lacks the fundamental qualities needed for it. It cannot possess integrity or benevolence because it does not have agency. It is a tool, subject to the design of the programmer, the bias of the data provider and the policies of its operator. We have been asked if we should trust artificial intelligence. If we do, we give it more power than it deserves, because ultimately we must remember that AI is just an interface between us and those who control it.
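To ground that last point, here is a second minimal sketch (again my own illustration, on entirely synthetic, hypothetical data): a model trained on biased historical decisions faithfully reproduces the bias of its data provider, with no intent and no integrity of its own.

```python
# Minimal sketch: the model inherits whatever bias its training labels
# contain. Nothing in the code is malicious; the bias lives entirely
# in the (synthetic) historical decisions it learns from.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Two applicant features: income (legitimate) and group membership.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)

# Biased history: group-1 applicants were penalised regardless of income.
approved = ((income - 20 * group) > 45).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Same income, different group: the "neutral" tool disagrees with itself.
candidates = np.array([[55.0, 0.0], [55.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])  # approval probabilities
```

The tool does exactly what its data taught it to do, which is the point: the question of trust belongs with the people and institutions behind the data and the design, not with the interface.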

Should we use AI? Of course. It can make our lives easier, entertain us and help solve complex problems. But should we trust it? Never. It lacks the basic qualities that are needed for trust, so we simply can’t.

Our team followed up this argument with a further elaboration of situations where human trust in artificial intelligence has been misplaced, using examples from autonomous vehicles to algorithmic bias. We closed by questioning the framing of the motion itself. ‘Trust in artificial intelligence’ implies trust in a technology, but what we’re really talking about is trust in the institutions that develop, own and run artificial intelligence. The opposition’s arguments focussed strongly on trustworthy AI: a maturing field within a relatively new discipline, but one that is being taken seriously by academia, business and regulators. Ultimately, they argued, this is what will allow humans to trust in AI.

Surprisingly, there were significant areas of alignment between the opposing sides. Those of us for the proposition argued that, rather than trusting AI, we need to ensure that AI systems are designed to keep humans in the loop and to support human agency and autonomy. Those against the proposition noted that the current ethical guidelines ask for human oversight and explainability in AI.

This alignment was great to see, as augmentation and human agency are core themes in our new AI design capability at Fjord: Designed Intelligence. It is our approach to unlocking the full potential of human and machine collaboration, enabled by a strategic framework, tools and activities that work across multiple levels of an organisation. To see some of the core concepts, check out this talk from the Dublin Tech Summit.

So who won the debate? Well, the voting took an interesting approach: the audience was polled before and after the debate, and the result was judged on how opinions shifted. Before, it was 15% for and 85% against the motion “humans should never trust AI”. After, it was 20% for and 80% against, so we moved 5% of the audience. A small leap for mankind? I see it more as a balancing of the scales, as applications of artificial intelligence move beyond automation and human factors are not just considered but valued.


Connor Upton is Data Design Director @ Fjord, Design and Innovation from Accenture Interactive.