Some Reflections on the Role of Government in Trustworthy AI
I had the honor of participating in Lanzamiento de la Ética de la Inteligencia Artificial en Colombia (“Launch of Ethics of Artificial Intelligence in Colombia”) on November 25, 2020, alongside Colombian President Iván Duque Márquez. My remarks highlight the different roles governments can (and should) play in promoting trustworthy AI. The reflections are informed by the Berkman Klein Center’s work on the ethics and governance of AI as well as ongoing learnings in the context of BKC’s Policy Practice on AI, in addition to the vast literature on trust, technology, and law.
Here is the transcript of my talk.
Thank you very much, dear Mr. President, dear colleagues, and Colombian friends. It’s an honor to speak to you today. First, let me congratulate you on the AI ethics framework, which is an important milestone in the implementation of the national policy for digital transformation and AI. It’s been a real pleasure working with your team, Mr. President, on some of these issues that you’re talking about today. And I’ve been impressed by your leadership and your office’s expertise and wisdom. So thank you very much for all this collaboration that you’ve supported.
Please let me use the next few minutes to reflect on the role that the government can, and I would argue should, play as we work together in this multi-stakeholder fashion towards trustworthy AI. I can build upon the excellent contributions by my colleagues and previous speakers when highlighting the crucial role trust and trust-building play in this particular moment in time. If we take two steps back, I would argue that AI — as a relatively new set of technologies, at least in terms of its application areas — poses a number of trust problems or trust challenges.
I would like to highlight just two such challenges and share them with you. The first challenge is one that we are actually familiar with as a society: the moment when a new technology becomes more widely available. When we introduce a new piece of technology, there is often a risky gap between what we know and what we don’t know about the technology and its interaction with society, including the unknown unknowns. And in some interesting ways, trust is a social coping mechanism to bridge this gap of uncertainty, to enable people to take a leap of faith, if you will. And indeed, as I reviewed again Colombia’s national policy, you’ve highlighted this lack of knowledge, this information gap, as a key barrier to the adoption of digital technology.
Now when we look at this trust issue, one key role of the government becomes immediately clear when it comes to trust-building. Like at this event today, governments can serve the role of a convener, of an educator to inform citizens. But of course, it’s more than just sharing information about the promise of AI. Yesterday we hosted with the president’s office a roundtable, a workshop to zoom in, quite literally, we were using Zoom, on children and young people to discuss the ways we can use the power of education, through skill-building, to empower the next generation to use these technologies in meaningful ways as they navigate their lives and build their futures.
In addition to the role of an educator, there is a second role that has become visible already today in our conversation: governments can promote trust by setting norms. And of course, the ethics framework that we’re celebrating today sets the background norms upon which expectations can become stable across the many stakeholders in the ecosystem and trust can crystallize around these expectations.
Building upon ethics, legal norms are needed to enable trust.
But of course, the Colombian government also understands that ethical norms are not sufficient. Building upon ethics, legal norms are needed to enable trust. And here again, we can learn from history when we look at the role of law in promoting trust. Law can actually help to build a bridge across the knowledge gap I mentioned at the beginning, for example by introducing transparency requirements, as previous speakers highlighted, or by introducing monitoring obligations. It is in this context, as my colleague from the World Economic Forum mentioned, that the right to explanation, or the idea of explanation within AI systems, is such an important trust enabler. There is another angle to law as a tool to build trust, which is perhaps less intuitive. One of the magics that law can bring to us is to enable trust by anticipating that trust is sometimes disappointed, and by putting into place, in advance, norms and rules that should apply once trust is disappointed. Think about sanctions or liability through that lens, and suddenly you may see law functioning to build trust in different ways: not the dominant paradigm that law is bad for innovation, but suddenly a richer narrative.
Now let me move quickly to AI’s second trust problem, which is more specific to AI. Unlike previous technologies, I think, trust in AI doesn’t only require that the technology works. In the past, think about trains or airplanes, or whatever technical system was innovative at some point. What was key to building trust for people was that a train brings you from A to B quite safely, and so does an airplane. But what we see with AI is probably a novel trust challenge. And that has to do with what AI applications do. They are involved in decision-making, unlike a train, and probably closer to modern airplanes. And the question becomes whether our AI is making sound decisions that are in our interests as users. In other words, we may move from a world of functional trust, like in the age of trains and locomotives, which is mostly a cognitive matter, to a world where it’s about fiduciary trust. This form of trust requires trusting not only cognitively, but also emotionally. And to me, this kind of trust in technology is much harder to achieve. It’s actually harder to earn.
And so the key question becomes again what are the roles that governments can play? I would just like to highlight two. First, I would propose that governments can serve the function of a seismograph and an amplifier. In my view, going forward, it’s important to capture the real-world feedback from how AIs are used in their specific social context — What issues emerge from these interactions? — and create a learning ecosystem within the government, and within society, to learn from these often weak signals in real-time, upgrade and revise, and adjust laws and policies dynamically. And that’s a real challenge, as we all know, for those doing work in the policy space. How can we create learning by design and build that into our governance systems?
Second, governments are themselves trust proxies. As users of AI technology, which governments often are, they can demonstrate how AI indeed can support sound, yes, better decision-making. Now, unfortunately, this is a high bar. As we know from many negative headlines these days, some of the early stories from when governments started to adopt and use AI have been the opposite of a success story. So I think that means we have to choose very wisely and ensure that we can demonstrate how AI serves this fiduciary role.
Now, taken together, a few things become clear. Governments can and do play many different roles when it comes to AI. Governments are promoters of AI, regulators, educators, users, coordinators, researchers. And that’s good news. I see all these roles described also in Colombia’s national strategy. It means that we have many different tools available, a broad toolkit for AI governance, and the Colombian government is embracing it. But there are also challenges that may immediately affect the trust equilibrium.
First, given that AI and AI applications are so contextual, and given that so many different government agencies are involved in decision-making, it’s very hard to maintain coherence in decision-making and in communication within the government. And we know, particularly now looking at COVID, that mixed signals from government agencies have a great potential to harm trust immediately. I think that’s a particular problem in such a complex policy area, where AI cuts across sectors and plays a role in transportation, education, health, you name it. How can we create and maintain a certain coherence in both policymaking and communication?
It’s not only what decisions will be made, but also how we arrive at these value judgments […] that will be key for trust outcomes.
A second complication emerges from the fact that some of the roles I mentioned may be in tension with each other. There may even be the risk of role conflict. For instance, sometimes the interests of the government as a user may be different from the interests of the government as a regulator. Now these conflicts, or at least tensions, are often not addressed in ethics frameworks or national policies. But any resolution will of course benefit from proactive dialogue. And I would argue and emphasize: it’s not only what decisions will be made, but also how we arrive at these value judgments in cases of these role trade-offs or tensions, that will be key for trust outcomes.
So in conclusion, and perhaps paradoxically, I think a healthy dose of skepticism towards the promises of AI might be the most productive and sensible way to work together towards trustworthy AI. I personally, together with the BKC team, am looking forward to being part of this journey of trust-building with all of you. Thank you very much.