Q&A: Frens Kroeger on trusted AI ecosystems

People + AI Research @ Google
Dec 1, 2020
Illustration by Oliver Macdonald Oulds for Google

Dr. Frens Kroeger holds a research professorship at the Centre for Trust, Peace and Social Relations at Coventry University, UK. He received his PhD from the University of Cambridge, and for the past 15 years his research has focused on the role and nature of trust. Frens talked with PAIR’s former writer-in-residence David Weinberger in May 2020. Find out more about Frens’s work at www.frenskroeger.com.

David: You’ve written that a new approach to trust in AI and machine learning is needed. Why?

Frens: From designers to regulators, it’s been pretty much just assumed that if you explain your algorithms, users will trust your AI application. But that rests on a whole host of underlying assumptions, including that AI developers should focus on creating trust in products and that the way to do that is to convey information about the product. I think experience shows instead that trust in complex products often comes primarily from a trusted ecosystem.

David: What’s wrong with basing trust on product information?

Frens: Nothing, except too often that approach assumes that the technology is in fact already trustworthy, so all we, the AI application designers, need to do is communicate that trustworthiness. It can also include the assumption that we know what the user wants to know about in order to trust the app.

I think the most important problem often is that the designers will give insight into how their algorithms work, but either the explanations are still quite technical and most users don’t understand them, or the explanations are quite simplified but then the users still need to trust in the explanations themselves — how can a user know that they are actually an accurate reflection of what is really going on? So explanations are rarely able to fully resolve the trust problem. But users often do have a basic, everyday understanding of how technological ecosystems work.

David: Can you give an example of such an ecosystem?

Frens: There are many, but I always like to say: In order to achieve appropriate trust, we will need people to trust AI the way they trust aviation.

When you get on a plane, at first glance it looks like you’re trusting the product — the plane and the service it is providing — but actually your trust is based on your knowledge of the wider ecosystem. You’re not an expert in aviation but you know there’s a regulatory apparatus governing it, you know there are systems in place so a pilot can’t fly drunk, rules for how often planes are serviced, and so forth. Even more fundamentally, you know that anyone who gets to design the machines has an engineering degree, and that an engineering degree means something, there are standards for it. You know that someone who is permitted to call themselves a pilot has gone through rigorous training. It’s knowledge about that ecosystem, but it’s a social, everyday kind of knowledge — the knowledge of someone who isn’t an expert in the technology.

We need the same everyday understanding of the AI ecosystem, although the institutional and regulatory infrastructures are likely to be quite different from those in aviation. AI is a part of everything from cars to voice assistants, and to build that trust we need governance mechanisms that are tailored to the different levels and types of risk associated with different AI applications. If there’s ever been a technology too hard for us to understand, it’s this. So we need to build an ecosystem that we can trust and that confers appropriate trust on the AI we use.

David: That’s put beautifully. But with planes, isn’t it also the case that we trust them because they hardly ever crash? Does that count as much with AI?

Frens: Of course it can be much harder to notice errors made by AI, especially in lower-stakes situations. It may have made a wrong recommendation and you’ll never know. Also, planes’ safety records are rather binary, but ML is always probabilistic in its predictions: We might trust an AI movie recommendation engine that’s right only 75% of the time because we expect to measure its success in terms of probability. But plane safety isn’t like that. So we will have to make adaptations. But if we often can’t even tell whether something has gone wrong or not, in a way that makes it even more important to have background trust founded on other things, including trust that relates to integrity and benevolence more broadly.

David: Do you think these trusted ecosystems will be necessary for users to trust the sorts of AI being invisibly embedded everywhere, from type-ahead suggestions to weather reports? Or might pure reliability be more important in some cases?

Frens: Again, maybe even more so. If all of this is invisible but you have an idea that the technology is at work everywhere, it may become more suspicious to you, not less — but if you trust that there is an ecosystem at work that makes AI fairer and safer you can worry about it much less. You also want to avoid a “trust backlash,” where people use the technology quite habitually and unthinkingly but then when they find out more about it — for instance how it processes and analyzes their data etc. — they think “I was wrong to be so trusting, I won’t repeat that mistake.”

David: What would a trusted ecosystem for AI look like?

Frens: No one knows yet. Figuring that out will be one of the major tasks for the coming years.

David: But I bet you have some ideas.

Frens: Yes, I’ve been working on this question. It will take a combination of factors. For one thing, you shift focus to processes — processes that can bring about trustworthiness more reliably. Just as we trust airplanes because we have a rough understanding of the security and safety processes, we will trust AI more once we have the same sense that it is developed and employed based on solid processes.

The big companies will have to do some self-regulation — collective, collaborative self-regulation. Plus governments and independent experts will play an important role. That will mean some institution-building. Yes, I know that sounds scary, producing tons of bureaucracy and so forth. But not all institutions have to be big and cumbersome, there are degrees of institutionalisation — this is actually something I have written about quite a bit — so you can build networks that are light-footed and sufficiently nimble.

That will require involving external people in the process.

David: What sorts of external people?

Frens: I’d look for collaboration among different companies, and collaboration with academics, social scientists, and public servants who have no dog in the fight.

David: To make the results trustworthy shouldn’t there generally be processes that involve the communities affected?

Frens: Definitely. One of the big problems with the approach that treats designers explaining things to users as the single way to build trust is that we’re not listening to users enough. AI development teams need specialized research that looks at the trust perspectives of users. You constantly need to stay in touch with the needs of the community, needs that go beyond efficiency. Values will factor very heavily in those. And that means that groups other than an AI business itself can help define what trustworthy AI means in the first place. That would have to be a constant concern of this ecosystem: getting feedback non-stop.

David: Which in fact is a core characteristic of ecosystems.

Frens: Yes, it’s not just about clear communication. You have to make algorithms and their development processes actually trustworthy. We’ll have to commit to that, which will take some money. And it has to be collaborative. Internal ethics boards are not enough.

Initially it’ll be a localized system. And I am quite confident that the first companies to be part of this will see very positive results from it. Then it will grow. A company will be able to say that their ecosystem is not just theirs, but spans an industry. That is what will ultimately produce appropriate trust.

David: How hopeful are you about this?

Frens: We are in a really interesting time, because now we can still turn things around. If we broaden our view on trust in AI now, and if we build an ecosystem like that of aviation that helps us produce truly trustworthy algorithms, we can build a foundation for real trustworthiness and real trust. But we have to start now.

Opinions in PAIR Q&As are those of the interviewees, and not necessarily those of Google. In the spirit of participatory ML research, we seek to share a variety of points of view on the topic.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.