AI and governance

Recent developments in AI have the potential to seriously disrupt many aspects of our lives and systems. There is a strong feeling that a technology this potent should be regulated. What is much less clear is how that governance should be framed.

Research in governance

The University of Oxford's Future of Humanity Institute houses the Centre for the Governance of AI:

Prof. Allan Dafoe of the University of Oxford talks about the governance of artificial intelligence:

Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab have launched the “Ethics and Governance of AI Initiative”, which looks at how the technology can be used for the public good. It is described at:

and can be seen at:

Interestingly, it focuses on three key areas: “AI and Justice”, “Information Quality” and “Autonomy and Interaction”.

Big corporations and governance

Google is examining the challenges of AI governance:

https://ai.google/static/documents/perspectives-on-issues-in-ai-governance.pdf

You can even find discussions about using AI for the governance of AI!

Looking across these documents, it is salutary to see just how many of the systems we rely on to regulate our complex society could be affected by AI.

We are therefore in a situation in which the governance of AI is being actively discussed and researched.

However, are you comfortable that the governance frameworks for AI are keeping pace with the speed with which the technology is developing?

What do you think?
