Dealing with digital transformation first requires one thing: better questions

Three Wrong Questions about Trust and Artificial Intelligence

Women in AI - Switzerland
WomeninAI


No trust, no use: trust is often put forward as a critical success factor in the acceptance and use of new technologies. But it’s not as simple as that. Do the use and acceptance of new technologies really go hand in hand with trust? To handle trust and AI more successfully, you first need one thing: better questions.

The use of ubiquitous smart technologies is increasing tensions between humans, machines, and society. Passionate debates are held in which the extremes of the-end-is-nigh Nostradamus followers and fervent tech evangelists seem to hold sway.

Uncertainty and skepticism are growing, which inevitably puts the spotlight on trust: how can we convince consumers to trust us? Framed this way, the question is a clever diversionary manoeuvre that distracts from weaknesses in a company’s own culture or the quality of its products. Use of the word trust is soaring in design guidelines, advertising, image campaigns and the codes of ethics of tech firms, banks and AI start-ups. But it mostly serves as a meaningless filler word intended to evoke positive connotations. You could call it “trust-washing”.

It’s high time to clear away the myths, speak plainly and stop asking the wrong questions:

1. Do you [the user] trust AI?

The question of whether someone trusts AI, or to what extent, is in fact completely pointless. The trust issue always has three dimensions: who trusts, who is trusted, and what the goal of this trust is. For example: I trust that Amazon will deliver my order promptly. But I don’t trust Amazon to use my personal data “ethically”, or that it won’t misuse it for marketing purposes and analyze me using questionable “psychographic” means.

A better question would be: Do you trust this [AI-based product] to achieve objective X?

2. How can we [the tech company] increase trust in AI?

The folks in marketing and sales departments will be clamouring to work out how to control, influence, or manipulate consumers so that trust in AI product X, and in turn the likelihood of adoption, is increased. Here, a clear demarcation and a change in focus are needed: trust and trustworthiness are fundamentally different concepts. Trust on the part of consumers is a mindset, whilst trustworthiness is a property of products, processes, or a company. Guidelines for working on these aspects are popping up en masse. One thing is clear: trust cannot be bought; it has to be earned by demonstrating that you are worthy of it.

A better question would be: How can we be trustworthy?

3. Should we [the society] trust AI?

Never, as J. Bryson would say. AI-based programs are not a matter of trust. Software needs to be trustworthy, i.e. built in such a way that its developers can be held accountable. This means we need to know, and be able to verify, what a particular system is capable of and what it is not. Trust is irrelevant here, just as it is in bookkeeping, where we rely on verification rather than faith. From an ethical perspective, the question is simply misplaced.

A better question would be: How can we better understand AI?

Countless psychological research groups are rightly working to decode the mystery of trust and technology: How does trust influence the way we adopt and use technology? What role do other factors, such as an understanding of AI or a perceived sense of agency, play in user behavior? Fatal accidents have been documented in which people placed too much or too little trust in technology: the well-known “death by GPS” phenomenon, for example, or the engineer who crashed his Tesla because he trusted the system completely to take him to his destination without any input. This led him to believe it was safe to play video games during the trip.

To summarize, we need a nuanced view of the situation and to take a transdisciplinary path that integrates science, practice, politics, and other stakeholders. It’s high time that we ask the right questions and jointly discuss them.

Author: Marisa Tschopp, Women in AI Ambassador Switzerland



We believe in the power of our community. These are the writings of the Swiss chapter of Women in AI, a global non-profit working to increase diversity in AI.