The Ethics and Governance of AI: On the Role of Universities

Urs Gasser
Berkman Klein Center Collection
7 min read · Jan 21, 2017

Artificial intelligence is everywhere, at times obscured and sometimes fully hidden. It lurks in the Facebook newsfeed algorithm that curates the news you see, it is being implemented in the software of semi-autonomous vehicles that decide who lives in case of an accident, and it spectacularly beat the world's top Go champions with its deep neural network technology. The applications of AI are evolving with increasing sophistication, sparking considerable and complex questions related to the social impact, governance, and ethics of the technology. These questions are particularly salient because accountability mechanisms for algorithms remain in a nascent stage, and the balance of power is skewed towards the industry giants who control these technologies. At this particular moment, the research, development, and deployment of AI is taking place primarily in the private sector, while governments around the world increasingly contract out their own use of these powerful technologies.

In this context, the future role of universities emerges as particularly meaningful for addressing these questions of the social impact, ethics, and governance of AI. Indeed, the history of AI, and its emergence as a discipline in the 1950s, is closely intertwined with academic researchers and institutions, particularly in the United States but also in Russia and elsewhere. The magnitude of the structural changes in society to be expected from AI and related technologies, together with the relative concentration of power, provokes the question: What is the role of universities in particular, and of the social and public interest sectors in general, when it comes to the ethics and governance of AI and related technologies?

From a meta perspective, I would argue that there are five primary roles that universities could, and plausibly should, play across disciplines in an environment marked by the fundamental shifts we are currently witnessing.

First, in alignment with their core mission, universities should supply open resources for the research, development, and deployment of AI systems, particularly AI applications and technologies in the public interest and for the social good. It is widely known that AI systems require significant infrastructure and data resources. And while it seems that we're currently in a golden age of AI research, with extensive knowledge-sharing and collaborations (including efforts such as OpenAI), the trajectory is uncertain. With enormous commercial interests behind the development of these technologies, as well as increasing geopolitical nation-state interests in the deployment of AI, we can expect that this golden age of openness will not last forever. I would argue that one role of the university should be to ensure access to AI resources and infrastructure over time, and in fact to diversify and broaden access to such resources, whether talent and education, computing resources and power, or data sets that play a strategic role in machine learning, for instance.

The second role for the university in the ethics and governance of AI revolves around the functions of access and accountability. Universities can play a key role as independent, public interest-oriented institutions that research and develop means of measuring and assessing the accuracy and fairness of AI systems. AI systems and AI-based decision-making systems are often black boxes. We do not fully understand what is happening behind the newsfeeds on our social networking sites, nor do most of us understand what Siri is doing when we use a personalized assistant of this sort. In some cases, even the makers of AI systems might not know, or be able to unpack, what such systems are going to do, with software that itself creates components of AI systems just around the corner. Some of the tools we need moving forward are new methodologies, metrics, and criteria, coupled with trusted review mechanisms that allow us not only to assess these black boxes and peer inside the algorithms, but also to discern the effects of seemingly innocuous processes such as how data sets are structured. In addition to this important reviewing function, universities can also play a role in designing corrective systems and mechanisms that respond to adverse judgments produced by AI decision-making systems that are not aligned with our society's values. In an increasingly commercialized environment, universities can emerge as trusted players, fulfilling a vital social function that companies themselves, and even governments, would be unlikely to play.

Enhanced by their potential role as trusted advisors, universities are also well positioned to perform the function of impact assessment, serving as key partners in researching and developing methodologies for the social and economic impact analysis of AI. This is methodologically challenging: we have been studying digital technology for many years, and it remains difficult to develop robust methodologies, particularly for technologies that are broad in their scope of application and embedded in manifold aspects of life and society. AI technologies pose an even greater challenge, as many of the underlying processes are largely invisible to all but the company that developed them. How do you measure the impact of an algorithmically determined Facebook newsfeed when the actual mechanism behind the filtering is opaque? I would argue that, particularly in this environment, universities and researchers can play a very significant role in establishing methodologies and determining suitable review and impact measurement factors. This role is most important, of course, if you believe in evidence-based policy-making. Universities have a key role to play in providing the evidentiary base for policy-making, which is critical in the long term. How do we ensure that we understand what these technologies are doing to society over time? This question is particularly important right now in the United States, where we are experiencing a widely reported quasi-war on science. How do we ensure the survival of our knowledge base, and continue to build it over time?

The fourth function of the university centers on engagement and inclusion, with universities serving as conveners that bring together stakeholders in the AI ecosystem who might otherwise be unwilling to engage in dialogue. One lesson learned from many years of working on digital issues is that industry players do not naturally come together, for an obvious reason: they are strong competitors. Moreover, they have historically not engaged with civil society in a structured way, and the challenge of including and engaging unheard voices and underrepresented communities in these conversations cannot be overstated. Universities have the potential to help close participation gaps by inviting perspectives that are not already part of today's conversation. One initiative we have discussed at the Berkman Klein Center is the creation of an inclusion lab, which would explore ways in which AI systems can be designed and deployed to support efforts aimed at creating a more diverse and inclusive digitally networked society. For instance, could we build programs in universities that focus specifically on AI technologies and are available not only to the digital haves, but also to the digital have-nots, particularly in the Global South, in rural areas, and in underserved communities?

A final role I would like to highlight that universities can and should play in the evolving AI landscape is that of translator: a trusted interface between the relatively small group of experts who understand the technology and the various techniques of AI, and the public at large, where understanding of AI is generally scarce. Within universities, both groups are well represented: we have AI experts, and we have curious non-experts (from an AI perspective) with knowledge in other domains, including philosophers, policy folks, and others. Universities have an obligation to bridge this information asymmetry and knowledge gap: to translate the mechanics of AI, to describe its implications, opportunities, and risks, and to make this knowledge broadly accessible to the people who, wittingly or unwittingly, are exposed to these technologies. This act of translation is urgent, as many myths and misconceptions surrounding AI have already emerged that make it hard for the public to make informed decisions, both as users and customers, and also as citizens.

This portfolio of potential roles, which embodies universities' opportunities and responsibilities in this space, requires a number of cross-cutting modes of engagement, a mindset and an orientation, that run within and across the five roles.

Across all of these roles, universities must first act as agents of integration across disciplines, stakeholders, and geographies. The field of AI is sweeping, with applications that touch just about every discipline in which humans engage. Likewise, in order to respond to the demands of the rapid evolution of AI technologies, universities must break down the silos within academia and across disciplines to enhance interoperability. Notably, there still exists a strong separation between the engineers and computer scientists developing AI techniques, technologies, and applications on one side, and the humanities scholars, social scientists, lawyers, governance researchers, and ethicists on the other. The recent launch of the Ethics and Governance of Artificial Intelligence Fund seeks to do just that: integrate the conversation as we move forward ever more rapidly, for the benefit of society and the greater good.

All of the actions taken by universities in addressing the great challenges of the age of AI require two specific mindsets that are inherent to the nature of academia: imagination and experimentation. As we design AI systems, how do we ensure that we put the technology to use as a way to solve humanity’s greatest problems, to design these systems in a way that honors the values that we hold dear? How do we keep humans, and society, in the loop? We need imagination. We will need new ways to bring together principles of ethics, design, engineering, frameworks of governance, and even the arts. These new modalities will require a fearless devotion and the unyielding courage to experiment. It is unlikely that we as a society will cope with this multifaceted technology by using a silver bullet regulatory solution, and what works must be tested over and over again. This spirit of experimentation not only applies to the development of AI technologies, but also to the development of governance systems, principles, and rules. Just as we did when the Internet emerged, we need to build, test, learn, iterate, and repeat the cycle time and again, while understanding the costs of these learning processes and safeguarding the bearers of these costs to the greatest extent possible.

From the perspective of the university, the wave of AI that has washed over the globe has sparked great opportunities. More importantly, these technological developments have underscored the responsibilities, and indeed the idiosyncrasies, that endow universities with the unique ability to act as providers, conveners, translators, and integrators, leveraging artificial intelligence in the public interest and for the greater good.

Urs Gasser is Dean of the TUM School of Social Sciences and Technology at the Technical University of Munich, and previously served as Executive Director @BKCHarvard.