A missing ingredient for mass adoption of AI: trust

Bahador Khaleghi
Element AI Lab
Nov 5, 2018

AI is so hot right now. That’s confirmed by studies such as a recent Gartner report that calls artificial intelligence an emerging “mega-trend.” In spite of all the excitement, however, modern AI has yet to establish itself in the enterprise world. There are initial hurdles in terms of building AI capabilities and finding the best applications. A bigger issue for widespread adoption by large organizations, however, is trust.

Modern machine learning techniques can do incredible things with data, understanding inputs on a level previously thought impossible. But how much do people trust the outputs? How much will people accept what AI systems tell them? That’s the question of AI trustability, and this article is one in a series exploring the answers.

Why is enterprise AI still at the pilot stage?

A recent study by TechEmergence shows the majority of AI projects in enterprise are pilots. In other words, full-fledged industrial AI systems remain a rarity, save perhaps for systems developed by a handful of tech giants. There are several factors contributing to this shortfall:

First, the much talked about, and serious, AI talent gap. Many companies do not have access to an adequate pool of AI experts capable of developing effective AI systems. A growing body of online learning material, along with new degree and non-degree programs, should help alleviate this issue in the years to come.

Second, developing AI systems is much more than just data modelling. It requires the proper approach and tools to collect the relevant data and actually put it to use. The majority of efforts, in both research and open-source communities, have so far focused on the modelling task. Recent initiatives such as the DAWN project at Stanford, Berkeley's RISELab, MIT's ML 2.0, and Databricks' MLflow are some of the ongoing efforts to address this issue.
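To make this concrete, the sketch below shows what minimal experiment tracking with MLflow, one of the tools mentioned above, might look like. It is a purely illustrative example, not taken from any of these projects; the dataset, model, and parameters are arbitrary stand-ins.

# Illustrative sketch only: track a training run with MLflow so that
# parameters, metrics, and the resulting model are recorded alongside the code.
# Assumes mlflow and scikit-learn are installed; the task is a toy example.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the run so it can be compared, reproduced, and audited later
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")

The point is not the model itself, but that parameters, metrics, and artifacts are captured end to end; this is exactly the kind of plumbing that data modelling alone does not cover.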

Third, the immaturity of some AI methodologies (see, for example, this recent article on deep reinforcement learning). While it is certainly true that modern AI approaches still have many limitations, plenty of industrial applications could nonetheless benefit from AI transformation using existing techniques in the ever-expanding literature.

The final issue, and perhaps the most significant in terms of long-term adoption, is trustability.

Imagine that the aforementioned issues have been addressed, i.e. we have a large enough pool of AI experts who can develop end-to-end AI solutions using mature AI methodologies and tools. The challenge then becomes one of trust: what sort of assurances will companies need to adopt AI solutions? What sort of inspection or certification would be proper and sufficient? And how can organizations convince employees to establish trust in these new intelligent systems? That’s what AI trustability is all about.

In AI we trust … or do we?

Simply because an AI model has yielded an acceptable level of performance during training and testing doesn’t mean it will be trusted or adopted by end users—whether that’s employees of a company or consumers themselves. Without trust, adoption and impact will always be limited.

Despite the primary importance of trust, the majority of AI research aims at improving outcomes on widely used measures of performance, including speed and accuracy. While this approach may be adequate for comparing AI methodologies and quantifying a certain kind of progress, it will certainly not be sufficient to convince people, especially those without technical knowledge of AI, to trust in the AI system itself.

The development of trust is an intricate process. For an AI system to be deemed trustworthy, it must address several technical, social, and psychological challenges rooted in human trust. The focus of this series is only the technical challenges of AI trustability, the constituent parts of which will be elaborated on in the next article.

What’s also important to note is that we should aim for the proper amount of trust in AI systems, i.e. in proportion to their true capabilities. Misplaced trust in AI can be very dangerous, and will eventually lead to a rupture of trust altogether. Fortunately, there is an ever-increasing awareness of this issue in the AI research community. For instance, the FAT workshop series (established in 2014), which focused on fairness, accountability, and transparency in AI, was recently promoted to a full conference.

What’s next?

Trust in AI will be one of the main enablers of its mass adoption. Understanding this perspective will ultimately help AI researchers and practitioners develop methodologies and frameworks that enable widespread adoption of enterprise AI.

This is part one of a series of articles exploring the various challenges of AI trustability and how researchers are approaching them. The next article will present a taxonomy of AI trustability challenges.

Last but not least, if you are an aspiring data scientist, designer, data engineer, AI expert and/or hacker and find the notion of AI trustability appealing, then check out our careers page here. We are hiring!

Special thanks to Peter Henderson, Jason Stanley, Archy de Berker, Wei-Wei Lin, Philippe Beaudoin, Xavier Snelgrove, Elnaz Barshan, and Satsuko VanAntwerp for valuable comments and illustrations!
