ICTC’S TECH & HUMAN RIGHTS SERIES

The Future of AI Governance

An Interview with Dr. Carina Prunkl


--

On March 26, 2020, ICTC spoke with Dr. Carina Prunkl as part of ICTC's new Tech & Human Rights Series. Carina Prunkl is a Senior Research Scholar at the Future of Humanity Institute, collaborating with the Centre for the Governance of AI. Her research focuses on the philosophy and ethics of artificial intelligence and its applications to governance and policy considerations. Kiera Schuller, Research & Policy Analyst with ICTC, interviewed Carina about the ethics of AI and the future of AI governance.

[Image: Wikimedia Commons, courtesy Xchen27 / CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)]

Kiera: Thank you so much for taking the time to speak with me today, Carina! It's great to "virtually" meet you. To begin, for our audience, can you tell us briefly about your background and how you came to work at the Future of Humanity Institute (FHI), researching the philosophy and ethics of AI?

Dr. Prunkl: Thank you very much for having me. I actually started out studying physics. It was only after my Master's degree about seven years ago that I moved into philosophy, and even then I was mainly working on the philosophy of physics. About halfway through my doctorate at Oxford, I became more and more interested in artificial intelligence and its impacts. This was around the time of several big developments in AI: facial recognition had improved to the point where it was possible to identify people on the basis of images posted online; AI systems had become really good at imitating human voices; and of course there was also the watershed moment when DeepMind's AlphaGo algorithm beat one of the world's best Go players, Lee Sedol. AI systems had become able to perform an increasing variety of tasks better, faster, and at a larger scale than us humans. I became aware that AI has the potential to fundamentally transform our society. It was this realization that drove me to then work on the ethics of AI at the Future of Humanity Institute.

Kiera: What kind of topics are you exploring right now at FHI's Centre for the Governance of AI?

Dr. Prunkl: I mainly work on the ethical and societal impacts of AI. This includes research into how AI systems may affect us as individuals and as a society, but it also involves finding governance solutions that minimize negative impacts while simultaneously making sure we, as a society, keep the benefits of AI development.

In one of my research projects, for example, I look at the effects of AI on human autonomy. Autonomy here describes the ability to live one's life according to one's own values and standards, free from manipulative and coercive influences. There are many ways AI might have an impact on human autonomy. One example is when AI is used to manipulate people; the Cambridge Analytica scandal is a well-known example of an attempt to do this on a large scale. Another, much more subtle, example relates to the use of recommendation algorithms on video streaming platforms. These algorithms have been shown to rope users into spending more time on the platform by recommending them more and more extreme and conspiratorial videos. A second important dimension of human autonomy in the context of AI is our ability to remain in control over our lives as more and more tasks are outsourced to AI systems. Here, the question arises of whether the increasing delegation of tasks has any impact on our ability to remain in control over important aspects of our lives, and what governance responses are appropriate to address these potential impacts.

Kiera: You also teach on the subject of ethics and social impacts of AI at Oxford. In light of all of this work, what are some of the most important ethical issues and social impacts we are currently facing?

Dr. Prunkl: AI systems are now able to perform an increasing number of different tasks. Some of those were previously carried out by humans, such as cross-referencing folders, driving cars, or assessing whether somebody was eligible for a loan. Other activities, in contrast, far out-scale human cognitive capacities, for example when AI systems browse millions of pictures to identify a person or analyze huge amounts of user data so as to make recommendations for news articles or videos.

There are several important issues that arise in this context. The first is reliability: whether the AI system performs the task we want it to perform in a reliable and sufficiently transparent way, so that we can verify that the system in fact did what we intended it to do. Currently, AI systems still have many surprises in store. MIT's LabSix, for example, showed that an image recognition algorithm can be fooled into misidentifying a picture of a cat as a picture of guacamole by simply changing a few pixels in the original picture. This is funny when the stakes are low, but it becomes problematic when similar algorithms are used in medical applications or implemented in self-driving cars. It also doesn't help that problems of this kind are often hard to detect: many algorithms are opaque, and it can be difficult for an operator or even the programmer to reconstruct exactly why an algorithm gives one output rather than another. In some cases the processes are so complex that it simply isn't possible to find an explanation for why the system decided this rather than that. This is what is sometimes called the "black-box" nature of some AI systems.
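
To make the idea concrete, below is a minimal sketch of a gradient-based perturbation (the "fast gradient sign method") applied to a toy model. It is illustrative only: LabSix's actual demonstration used a different, black-box technique against a deployed classifier, and the model and "image" here are random stand-ins rather than a real vision system.

```python
# Illustrative sketch of an adversarial perturbation (FGSM), NOT LabSix's method.
# The "classifier" and "image" are toy stand-ins used only to show the mechanism.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy image classifier: flattens a 3x32x32 "image" into 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in picture
original_class = model(image).argmax(dim=1)

# Fast Gradient Sign Method: nudge every pixel slightly in the direction
# that increases the loss for the currently predicted class.
loss = nn.functional.cross_entropy(model(image), original_class)
loss.backward()
epsilon = 0.05  # perturbation budget: visually almost imperceptible
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

new_class = model(adversarial).argmax(dim=1)
print("before:", original_class.item(), "after:", new_class.item())
```

Against a real trained network, a perturbation of this size routinely changes the prediction while leaving the picture looking unchanged to a human, and because the model's internal reasoning is opaque, such failures are hard to anticipate or to diagnose after the fact.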

This becomes particularly problematic when AI systems are deployed in contexts that have direct impacts on individuals or society. For example, to me it seems obvious that an AI system that is used to determine recidivism risks of criminal offenders and serves as the basis on which bail decisions are made should be explainable, in the sense that we can know why it judged one offender as having a higher flight risk than another. This, however, is not currently the case. Moreover, many algorithms are protected by intellectual property rights, making it difficult to scrutinize them from the outside. Users often have no means to analyse how their system operates or what its limitations are. Here, we really need to see changes before we can deploy AI systems for high-stake tasks.

A second big challenge arises in the context of malicious use of AI. AI systems can be very powerful, so naturally we need to make sure they are not used to inflict harm of any kind. We can already see AI being used for fraudulent purposes; for example, when voice deepfakes (realistically reproduced voices of real people) are used to trick people into paying money. Another harm, one which I already mentioned, is large-scale manipulation of people and the role AI may play in the spread of misleading or wrong information.

These are direct and immediate harms, but I think it is equally, if not more, important to also observe and possibly intervene in developments that might not cause significant harm now (though this is often debatable) but which may be detrimental to society in the long run. The use of facial recognition technology is one such example. Your face is something very personal and something you can't change, or at least not very easily. The use of facial recognition technology in public spaces might help us fight crime more efficiently, but it comes at a huge cost to our privacy rights and civil liberties. Here, it is important that governments take action to ensure that facial data, if its collection is deemed acceptable at all (something I believe will vary between cultures), is handled with appropriate care and with accountability mechanisms in place. What is completely unacceptable, in my view, is the use of facial recognition technology by private companies to collect information about customers, which is already being done by some shopping malls in Canada and the US.

Kiera: What about the impacts of AI on inequality?

Dr. Prunkl: Yes, this is another important issue. In some cases, AI may reproduce or even exacerbate gender, racial, or other stereotypes. In these cases, the use of AI systems may reinforce existing inequalities in our society, for example when a hiring algorithm systematically filters out female applicants for certain positions, as was the case with one of Amazon’s algorithms. This is because AI is fundamentally data driven — it learns from the data it is being fed. If inequalities are encoded in this data, be it in the form of significantly more male applicants than female applicants or in the form of the systematic preference of male applicants over female applicants, then the algorithm will learn that a typical successful candidate is a male candidate.

Notably, for such discrimination to happen, the initial data set does not need to contain data points on sensitive attributes such as gender or race. In the case of Amazon's hiring algorithm, applicant CVs were completely anonymous. The algorithm instead picked up on data points that strongly correlated with gender, such as the name of an all-girls high school or a particular combination of subjects. The fact that many sensitive attributes correlate strongly with other attributes is what makes it so difficult to get rid of such embedded biases. It is therefore important that we carefully consider when and for what tasks we deploy AI systems.
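
As a purely illustrative sketch (synthetic data and a generic logistic regression, not Amazon's system or data), the example below shows how a model trained without any gender column can still reproduce a historical hiring gap through a correlated proxy feature:

```python
# Synthetic illustration of proxy discrimination: the sensitive attribute is
# never given to the model, yet the historical bias is reproduced anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female (hidden from the model)
skill = rng.normal(0, 1, n)           # genuinely job-relevant feature
# Proxy feature that strongly correlates with gender, e.g. "attended an all-girls school".
proxy = (gender + rng.normal(0, 0.3, n) > 0.5).astype(float)

# Historical hiring decisions: biased against women despite equal skill.
hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n) > 0).astype(int)

# Train only on "anonymous" features; the gender column is never used.
X = np.column_stack([skill, proxy])
clf = LogisticRegression().fit(X, hired)
pred = clf.predict(X)

for g, name in [(0, "male"), (1, "female")]:
    print(f"predicted hiring rate, {name} applicants: {pred[gender == g].mean():.2f}")
```

Dropping the gender column changes nothing here: the model recovers it through the proxy, which is exactly why strongly correlated attributes make embedded biases so hard to remove.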

We may also see effects on social and economic inequalities as a result of large-scale automation. AI systems will be able to perform a variety of tasks that were traditionally assigned to humans. While the exact impacts are incredibly difficult to anticipate, it is generally thought that this will lead to increased unemployment, with some occupational groups being more affected by these developments than others. Notably, the fact that different labour markets are more or less prone to automation also means that impacts may vary across geographical regions. This means that AI-facilitated automation might also have impacts on global inequality.

Kiera: What are some of the major efforts we are seeing today to address these ethical and social issues? Whether regulatory efforts, industry design efforts, or otherwise, how effective are these efforts, and who are the main actors driving them?

Dr. Prunkl: In the last few years, we have been seeing more and more stakeholders engaging in efforts to address some of the issues listed above. On the government and regulatory level, the amount of engagement still differs significantly between countries. The UK and EU are certainly two frontrunners when it comes to regulatory efforts, but even here, we are only at the very beginning of figuring out how best to address the challenges posed by AI. Generally, what we currently see is an effort to fold newly arising issues into existing governance structures. For example, in the UK, the permissibility of using facial recognition technology is investigated by the ICO (the Information Commissioner's Office, basically the UK's data-protection watchdog). Online harm and online manipulation, on the other hand, fall under the remit of Ofcom (the regulatory authority for broadcasting and telecommunications).

Broadly, trying to embed issues arising from AI into existing governance structures does make sense from a regulatory point of view because there is so much overlap already. If we already have an agency in place that is concerned with the use of biometric data, it makes sense to try to incorporate the regulation of facial recognition into those already-existing legal structures. Nevertheless, there are also problematic aspects to dealing with these issues in such a fragmented fashion; not only is there a lot of overlap between them, making the fragmentation seem inefficient, but some issues are novel and don't have a natural home in any of the existing regulatory bodies. To address this problem, in part, the UK has just set up the new Centre for Data Ethics and Innovation, an independent advisory body to investigate the impacts of AI and new technologies and advise the government on how to regulate them. The Alan Turing Institute, the UK's national institute for artificial intelligence, also contributes in an advisory capacity.

I've only addressed government and regulation above, but there have been important efforts by many other stakeholders. Industry has been engaging on the topic, and we have the Partnership on AI, which coordinates efforts between stakeholders. Research institutions have also engaged; in Europe, there are a lot of institutions focused on the ethics of AI. Here in Oxford, the university has just launched a new Institute for Ethics in AI, and there are many different groups working on related issues.

Kiera: You collaborate with the Centre for the Governance of AI on how to most practically implement ethical considerations in current and future governance solutions. As a philosopher, how do ethics come into play in AI governance? What does this look like in practice?

Dr. Prunkl: When people think about ethics or moral philosophy, they usually think of people sitting in an armchair and thinking about what is right or wrong. But the main contribution of ethics or philosophy to AI, I think, is not merely about discussing right and wrong, but about systematically thinking through the issues that arise, uncovering hidden assumptions, embedding them in their wider context, and flagging ambiguities or inconsistencies. For example, a large number of ethics guidelines and principles related to AI state that we should have "fair" algorithms, that algorithms shouldn't negatively impact human autonomy, and so on. But this is not particularly instructive for those developing the algorithm, because fairness is such an elusive concept. What we consider fair differs from context to context and, almost more importantly, people may disagree about whether an act is fair or not. Luckily, philosophers have thought about these issues for a long time and have developed clear distinctions between different conceptions of fairness. It is only when we make these different conceptions explicit that we can start having a discussion about whether an algorithm is fair or unfair.
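
To give a flavour of what making these conceptions explicit can look like in practice, the sketch below contrasts two common formalizations on synthetic data: demographic parity (equal positive rates across groups) and equal opportunity (equal true-positive rates among the truly qualified). The groups, rates, and thresholds are illustrative assumptions, not drawn from the interview.

```python
# Two formalizations of "fairness" evaluated on the same synthetic predictions.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)                                   # sensitive attribute
# Underlying qualification rates differ between the two groups (0.6 vs 0.3).
y_true = (rng.random(n) < np.where(group == 0, 0.6, 0.3)).astype(int)
# Predictor that selects truly qualified people at the same 80% rate in both groups.
y_pred = ((y_true == 1) & (rng.random(n) < 0.8)).astype(int)

def positive_rate(mask):
    return y_pred[mask].mean()

# Demographic parity: P(pred = 1 | group) should be equal across groups.
dp = [positive_rate(group == g) for g in (0, 1)]
# Equal opportunity: P(pred = 1 | qualified, group) should be equal across groups.
eo = [positive_rate((group == g) & (y_true == 1)) for g in (0, 1)]

print("positive rate by group:      ", [f"{r:.2f}" for r in dp])
print("true positive rate by group: ", [f"{r:.2f}" for r in eo])
```

On this synthetic data the predictor roughly satisfies equal opportunity but clearly violates demographic parity, because the underlying qualification rates differ between groups. Which of the two, if either, is the right target in a given context is precisely the kind of question that requires these conceptual distinctions.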

Philosophy, or more precisely ethics, will also be important in situations where we find different values clashing with each other. Such situations occur both in AI design and in its regulatory context. For example, there are often trade-offs between fairness and privacy: if a firm wants to test whether its algorithm is fair with respect to some fairness measure, it may not be able to do so because privacy laws prohibit the company from using certain sensitive data. Another example is the clash between safety and human control over AI systems, such as when the passenger of a self-driving car wants to interfere with the algorithm in a way that might significantly increase the risk of causing an accident. Such value trade-offs already do, and increasingly will, play an important role in the development and deployment of AI. Philosophers have the tools required to think through these trade-offs and can assist in determining when such trade-offs are acceptable and when they aren't.

Kiera: We are at the end of our time and it has been such a great conversation. To end on a personal note, what are you most excited about in terms of your own work in this field going forward?

Dr. Prunkl: I am excited about the opportunity of working with different stakeholders on finding governance solutions for AI. It is important that we jointly ramp up efforts to make sure AI is developed and deployed in a safe and responsible manner so as to avoid lock-in effects that might become difficult to change in the future. Being able to contribute to this process through my research and my engagement with policy makers is one of the most rewarding aspects of my work and I look forward to finding new ways to ensure that AI is safe and beneficial.

Kiera: Thank you again so much for your time today! It was a pleasure to have this conversation.

Dr. Carina Prunkl is a Senior Research Scholar at the Future of Humanity Institute, where her research focuses on the philosophy and ethics of artificial intelligence. She is an affiliate of the Centre for the Governance of AI and works on the implementation of ethical considerations in current and future governance solutions. Her current projects focus on autonomy and meaningful human control, as well as on the responsible development and use of AI technologies. You can read more about her work here.
Kiera Schuller, Research & Policy Analyst (ICTC), has a background in human rights, international law, and global governance. Kiera launched ICTC's Tech & Human Rights Series in 2020 to explore the ethical, social, and human rights implications of emerging technologies such as AI and robotics, including impacts on rights to equality, privacy, freedom of expression, and non-discrimination.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI for issues like equality, privacy, and freedom of expression, whether positive, neutral, or negative. The series also explores questions of governance, participation, and uses of technology for social good.

--


Information and Communications Technology Council (ICTC) - Conseil des technologies de l’information et des communications (CTIC)