In AI We Trust?

Urs Gasser
Berkman Klein Center Collection
Sep 29, 2017

Do you trust Artificial Intelligence? “Not yet,” was the overwhelming response to an informal survey conducted among a small, non-representative, but rather diverse group of experts who were invited by the Digital Asia Hub to a global roundtable discussion about AI and trust, held as an official side event of the International Conference of Data Protection and Privacy Commissioners in Hong Kong.

The question of trust in AI is certainly timely: We are witnessing a wave of AI-based technologies making their way out of the labs and into industry- and user-facing applications, and we know from history that trust is an important factor shaping the adoption of new technology. Given today’s quicksilver AI environment, it seems fair to ask: Do we already have the necessary trust in AI, and if not, how do we create it? In light of the diverse nature and contextuality of the many applications bundled under the umbrella term “AI”, this empirical question of course needs to be further specified if it is to yield meaningful answers. That said, even in its abstract formulation the question invites reflection on a series of interesting aspects of the interplay between trust and technology.

To begin with, the question of trust vis-à-vis the proliferation of autonomous vehicles, AI-driven newsfeeds, or next-generation digital tutors, to name just a few examples, reminds us of the basic function trust serves in a functionally differentiated society: Trust is a way to reduce complexity and to act successfully in a world full of uncertainty. In other words, and applied to new technology, trust has the potential to bridge an information gap at a moment when the promises and risks of a given technical innovation are still unclear. However, given the extraordinarily high levels of complexity within AI systems, and the significant knowledge gap between the few who understand these systems and the many who are affected by them, it would not come as a surprise if trust were currently not universally available to serve as such a bridging and coping mechanism.

The issue of trust in the age of AI reveals another fascinating aspect: Over the past decade, there has been much debate and controversy among philosophers, ethicists, sociologists, and others about whether trust — a concept born of human-to-human interaction — can actually be applied to technology as an artifact. While the dominant paradigm still sees trust as an inherently interpersonal concept, scholars have argued that it can be applied to the human-machine context as well. Some argue that trust, as applied to technology, simply means “reliability” in the engineer’s language. Others argue that it indirectly refers back to the creator or designer of the technology. In this regard, AI might lead to a paradigm shift: As technological artifacts become autonomous, or at least less predictable given techniques such as machine learning, traditional notions of interpersonal trust may actually be more directly applicable to human-machine interactions than in the past — and may even shape our understanding of interactions among intelligent artifacts.

Taken together, this suggests that we might at least borrow from a rich literature on the role of trust in society when confronted with the rise of AI. In this regard, I find Niklas Luhmann’s work particularly helpful, including his differentiation between interpersonal trust and trust in systems — or systems trust. Luhmann’s theory of trust offers a number of possible insights, or at least hypotheses, about society’s current uneasiness about relying on trust as a (main) mechanism to cope with the complexity and uncertainty inherent in AI. At least three dimensions are worth highlighting.

  • Trust can be understood as a mechanism that emerges between two ends of a spectrum: “knowing” and “not knowing”. Someone who knows everything doesn’t need to give trust (in German: Vertrauen schenken). Someone who doesn’t know anything can simply hope. Applied to the current state of AI, with the massive information asymmetries mentioned above, it seems questionable whether we as individual users (or, where systems trust is concerned, the mass of users) have reached the necessary equilibrium between “knowing” and “not knowing” to be able to activate and rely on trust. The current debates about the need for better user education and increased literacy, enhanced algorithmic transparency, and the introduction of a right to explanation for AI-based systems — often perceived as black boxes — are indicators that some of the preconditions for the formation of trust have not yet been met.
  • Trust also bridges a spectrum between the poles of total control and completely unsupervised “laissez-faire”. In interpersonal trust, we have established sophisticated mechanisms and institutions to occasionally check in on the behavior of people we trust — and to detect breaches of trust. In complex systems like the world of AI, however, such mechanisms can only be developed within the (technical) system, not imposed from the outside, as Luhmann’s theory demonstrates. The debates about the need for AI accountability mechanisms, as well as attempts by industry to create monitoring and detection systems — including AI guardian systems — to address issues such as bias, suggest that an appropriate degree of control, which is co-constitutive of trust, has yet to emerge.
  • Trust in specific systems, particularly technical systems, is often linked to general trust in other systems, as previous analyses demonstrate. The law — and in the context of information and communication technology, data protection law in particular — is an example of such a trust-supporting system. The basic idea is that general trust in the availability of legal protections or remedies, should trust in the supported system be violated, can stabilize trust in that system. Again, the contemporary debates about the current state of law as applied to technology and AI — whether in terms of law’s limited effectiveness in a globally networked world or the need to “upgrade” important aspects of the legal system (e.g. liability law) to keep up with AI — indicate that such trust support is currently not readily available. This, in turn, might hamper the formation of trust in AI and, as a result, slow down the widespread adoption of AI-based technologies.

These observations also suggest, however, that trust in AI should be understood as a function of time. The speed with which information asymmetries can be reduced, accountability mechanisms developed, and trust-supporting mechanisms introduced will likely shape the proliferation and adoption of AI-based technologies in everyday life. In that sense, we may not yet trust AI in all of its manifestations today, but we are likely to build that trust in the foreseeable future.

Urs Gasser

Dean TUM School of Social Sciences and Technology, Technical University of Munich, previously Executive Director @BKCHarvard